
Posts 27 · Comments 493 · Joined 2 yr. ago

  • There's a place for AI in NPCs, but developers will have to know how to implement it correctly or it will be a disaster.

    LLMs can be trained on specific characters and backstories, or even "types" of characters. If they're trained correctly, they'll stay in character and be reactive in more ways than any scripted character ever could. But if the devs are lazy and just hook it up to ChatGPT with a simple prompt telling it to "pretend" to be some character (roughly what's sketched at the end of this comment), then it's going to be terrible, like you say.

    Now, this won't work very well for games that are trying to tell a story, like Baldur's Gate... instead, this is better suited to open-world games where the player interacts with random characters that don't need to follow specific scripts.

    Even then it won't be everything. Just because an LLM can say something "in character" doesn't mean it will line up with the NPC's in-game actions, so additional work will be needed to tie actions to the proper kind of responses.

    If a studio is able to do it right, this has game-changing potential... but I'm sure we'll see a lot of rushed work before anyone pulls it off well.
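    For what it's worth, the "lazy" version is literally a handful of lines. Here's a rough sketch (the model name, character prompt, and function name are placeholders I made up) of the ChatGPT-with-a-"pretend"-prompt approach; note that nothing about the game state ever reaches the model:

    ```python
    # The "lazy" approach: a bare API call with a "pretend" prompt.
    # Model name and prompt are placeholders. No game state is passed in,
    # which is exactly why this tends to contradict the NPC's in-game actions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def npc_reply(player_line: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Pretend to be Garrick, a gruff blacksmith in a fantasy village."},
                {"role": "user", "content": player_line},
            ],
        )
        return response.choices[0].message.content

    print(npc_reply("Have you seen anything strange in the woods lately?"))
    ```

    A studio doing it properly would instead train on the character's backstory and feed the NPC's current state (location, quest flags, recent events) into every request.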

  • It really depends on the subject.

    If it's programming/hardware in general, then there's not much debate.

    But when it comes to discussing buzzwords or other hot-topic items (cryptocurrency, AI/ML models), there's a lot more debate.

  • Are you saying "No... let's not advance mathematics"? Or... "No, let's not advance mathematics using AI"?

  • Just wait till someone creates a manically depressed chatbot and names it Marvin.

  • On Android here: the video plays in the comment from brbposting, but there's no audio and no controls for play/pause, mute/unmute, etc.

  • After reading through that wiki, I don't think that's the sort of thing that would work well for what AI is actually able to do in real time today.

    Contrary to your statement, Amazon isn't selling this as a means to "pretend" to do AI work, and there's no evidence of this on the page you linked.

    That's not to say this couldn't be used to fake an AI; it's just not sold that way, and in many applications it wouldn't be able to compete with existing ML models.

    Can you link to any examples of companies making wild claims about their product where it's suspected that they are using this service? (I couldn't find any after a quick Google search... but I didn't spend too much time on it).

    I'm wondering if the misunderstanding here comes from the sections of that page related to AI work? The kind of AI work you would do with Turkers is the work needed to prepare data for training a machine learning model: things like labelling images, transcribing words from images, or (to put it in a way most of us have already experienced) solving captchas that ask you to find the traffic lights (so that you can help train a self-driving car model).
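    To make that concrete, here's a minimal sketch of what posting one of those labelling tasks looks like through boto3's MTurk client. The question HTML, reward, and image URL are placeholders I made up, and this targets the requester sandbox rather than the live site:

    ```python
    # Sketch: posting a yes/no image-labelling task to Mechanical Turk.
    # The HTML, reward, and image URL are illustrative placeholders.
    import boto3

    client = boto3.client(
        "mturk",
        region_name="us-east-1",
        # Sandbox endpoint; drop this to post real, paid HITs.
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )

    question_xml = """
    <HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
      <HTMLContent><![CDATA[
        <!DOCTYPE html>
        <html><body>
          <form method="post" action="https://www.mturk.com/mturk/externalSubmit">
            <img src="https://example.com/image_0001.jpg" />
            <p>Does this image contain a traffic light?</p>
            <label><input type="radio" name="answer" value="yes" /> Yes</label>
            <label><input type="radio" name="answer" value="no" /> No</label>
            <input type="submit" />
          </form>
        </body></html>
      ]]></HTMLContent>
      <FrameHeight>450</FrameHeight>
    </HTMLQuestion>
    """

    hit = client.create_hit(
        Title="Label an image (traffic light: yes/no)",
        Description="Answer one yes/no question about an image.",
        Reward="0.05",              # USD, passed as a string
        MaxAssignments=3,           # ask three workers, then majority-vote
        LifetimeInSeconds=3600,
        AssignmentDurationInSeconds=300,
        Question=question_xml,
    )
    print(hit["HIT"]["HITId"])
    ```

    The answers come back as labelled training data, which is the whole point: it's data preparation for ML, not someone pretending to be the model.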

  • I don't think that "fake" is the correct term here. I agree a very large portion of companies are just running API calls to ChatGPT and then patting themselves on the back for being "powered by AI" or some other nonsense.

    > Amazon even has an entire business to help companies pretend their AI works by crowdsourcing cheap labor to review data.

    This is exactly the point I was referring to before. Just because Amazon is crowdsourcing cheap labor to back up their AI doesn't mean the AI is "fake". Getting an AI model to work well takes a lot of man-hours to continually train and improve it, and to make sure it keeps performing well.

    Amazon was doing something new (with their shopping-cart AI) that no model had been trained on before. Training on demo/test data doesn't give you the kind of data you get when you actually put the system into a real-world environment.

    In the end, it looks like additional advances are needed before a model like this can be reliable, and even then someone should be asking whether AI is really necessary for something like this when more reliable methods are available.
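    For anyone unfamiliar with how that human backup usually works, here's a hypothetical sketch of the standard human-in-the-loop pattern (the names and threshold are made up, not Amazon's actual pipeline): low-confidence predictions get routed to reviewers, and the corrected labels feed the next round of training:

    ```python
    # Hypothetical human-in-the-loop routing; not any company's real pipeline.
    from dataclasses import dataclass

    @dataclass
    class Prediction:
        item: str
        label: str
        confidence: float  # model's confidence in [0, 1]

    CONFIDENCE_THRESHOLD = 0.9  # illustrative cutoff

    def route(predictions):
        """Split predictions into auto-accepted ones and ones needing human review."""
        accepted, needs_review = [], []
        for p in predictions:
            (accepted if p.confidence >= CONFIDENCE_THRESHOLD else needs_review).append(p)
        return accepted, needs_review

    preds = [
        Prediction("cart_1234", "2x apples", 0.97),   # auto-accepted
        Prediction("cart_5678", "1x cereal", 0.62),   # sent to a human reviewer
    ]
    accepted, needs_review = route(preds)
    # Reviewed/corrected labels would then be added to the next training set.
    ```

    That review queue is exactly the kind of work outsourced staff end up doing: it's how models get better, not proof that there was never a model.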

  • This would actually explain a lot of the negative AI sentiment I've seen that's suddenly going around.

    Some YouTubers have hopped on the bandwagon as well. There was a video posted the other day where a guy attempted to discredit AI companies overall by claiming their technology is fake, and a lot of users were agreeing with him.

    He then pointed to stories about Copilot/ChatGPT outputting information that was very similar to a particular travel website's, and to reports that Amazon Fresh stores required a large number of outsourced workers to verify shopping cart totals (implying that there was no AI model at all, and not understanding that you need workers like this to actually retrain/fine-tune a model).

  • It becomes easy to do something like this once we start vilifying others and thinking that they "deserve it".

    In this case, according to the man who threw the water, the homeless person had a history of sexually harassing and being violent toward attendees.

    We see this all the time in politics. We're so used to attacking the other side verbally that when one side says something offensive to the other side, physical fights can break out.

    [Image of the apology]

  • Do you have a source for those scientists you're referring to?

    I know that LLMs can be trained on data output by other LLMs, but you're basically diluting your results unless you do a lot of work to clean up the data.

    I wouldn't say it's "impossible" to determine whether content was generated by an LLM, but I agree that it won't be reliable.

  • But what app did you use to access OSM and download the maps for offline use... was it a web browser? OsmAnd? Vespucci?

  • Says the person using light mode!!

    (On a serious note, upvotes and downvotes mean different things to different people. That's just their own opinion, and that's okay. But if you're bothered by downvotes, I would use a Lemmy instance that hides them entirely.)

  • Looks like he instantly got VAC banned with that triple headshot?

  • This video would more accurately have been titled "Things That Make AI Look Bad" rather than an attempt to prove that AI was faked.

  • I would be careful trusting everything said in this video and taking it at face value.

    He touches on a broad range of AI-related news but doesn't seem to fully grasp the technology himself (I'm basing this on his "evidence" around the 8-minute mark).

    He seems to be running a channel that's heavily centered on stock-market content, and it feels like he's putting his own spin on every topic he touches in this video.

    Overall, it's not the worst video, but I would rather base my information on better-informed sources.

    What he should have done was set a baseline by defining what AI actually is, then compare what these companies are doing against that definition. Instead, we get a list of AI news stories covering Amazon Fresh stores, Gemini, ChatGPT, and Copilot (powered by ChatGPT), plus his own take on how those stories mean everything is faked.

  • That makes sense, but I haven't seen any official announcement from Steam saying they did this, only speculation from random people. All the documentation I can find points to this being a decision made by the company releasing the game (or in this case, Sony as the publisher).

    Besides, only a few hours ago 3 new countries were added to the restricted list: https://steamdb.info/sub/137730/history/?changeid=23492083

    I doubt that Steam is still trying to block additional countries given that Sony has already announced that the PSN account requirement is being withdrawn.