Ok I experienced this.
I went to a post, switched between multiple apps and took a screenshot, then came back to Liftoff, and it brought me back to the refreshed feed instead of staying on the post.
One issue with learning and training is that you'd have the same limitations as now. You are still human, just connected to a machine, and time cannot be accelerated to learn faster.
However, if we could move anywhere, change time to whatever we want, create whatever we want, and still have it look real, then that could make something very interesting for learning and training. It wouldn't be faster, but for example a teacher would be able to create a world where they can help students learn better, with images, simulations, stories...
However, that may also create some issues: it wouldn't be wise to recreate wars, death, and other things that can be shocking to people, because with that realism it would be very hard to distinguish between a simulated war/death and a real one.
Tho it could be a huge benefit for training, for example for flying a plane: cheap, and no risk of breaking anything.
Can ChatGPT be easily distinguished from a real person (if it doesn't say it's an AI)?
It is still possible, but not easy (and it's getting harder with time). Tho that doesn't make it a person. We don't yet have the tech capable of making an entire person just in AI. But if we had it, your concerns may very well be a good point. Still, these AI humans may not be considered persons, even if we suppose current tech enhanced.
However, another moral issue: let's say there is an AI human in there, and a player falls in love with it.
Is the player marrying a person or an AI? From their perspective it could very well be a person.
But from someone else's perspective it would be an AI.
How would other people need to treat such an AI? As a person? Not as a person? How awkward would it be?
Then another one (if everything looks and feels like the real world): AI humans in there wouldn't be considered people. Would that mean you could enslave them? Commit "crimes" (and do other things considered "bad"), since they aren't considered people? If they look and act like real people, is it moral to do such things?
I did not read the book, but I can imagine it being interesting for a bit.
I don't know how someone would react to something like this, however.
Maybe it would become meaningless; tho if people still needed to go into the real world to work, maybe it would become a way to escape the real world.
Which would mean that once you get out into the real world, life may seem bad and depressing compared to that virtual world.
It could generate undesirable effects, with people staying in that reality for days (ex: what was imagined in Ready Player One).
It could increase depression and suicide rates...
Ready Player One
Matrix
And maybe others but I don't know.
It would be extremely hard to resist. Such tech may be expensive, tho it could still end up owned by poorer people once it decreases in cost, as it would let them escape their poverty.
Tho because it's mostly companies that will build things like that, I mostly see something like in Ready Player One: a giant social network/game where you can participate in plenty of different activities, which can look like the real world or not.
As for the Matrix version, where you are in a world filled with "real people"/AI, the same world but with some superpowers, well, I'm not really sure.
Do you really want to have powers? What would you do with them?
It's also difficult to build a world like that. Social interactions are pretty much needed by most people. Even if these people don't see it directly, going out and buying something is social interaction.
If those AI people aren't convincing, the experience would most likely be mediocre, given the objective it implies (recreating a similar world where you can do anything).
Tho maybe if it is used as a game, maybe it could interest more people.
However, it would amplify the social isolation of many people and break many things.
This is why I'd rather see it as like a social media/game universe.
Another issue with the question is that, well, no such thing exists yet. So it's difficult to even know whether it would be interesting or not. Would we be absorbed in it all day, like people were in Ready Player One?
Will companies try to control us? Make us buy things?
I have no idea what reporting does, or even whether the report is sent.
I tried reporting some nudity content in a community, but it was broken in 2 apps, and took a very long time from the browser...
Well, companies could still be leeches. If what they build around your software isn't a direct derivative (which it most likely won't be), then companies will still be able to publish their closed-source work while merely mentioning your open source software (if they even need to).
They benefit from your code and give nothing in return.
This can also be true if the companies use your software without redistributing it. They will just use your software and never give anything back.
To put it as short (and as badly) as possible: they are very complex probability machines, which compare the probabilities of possible next words given your sentence to choose the words to say back to you.
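That next-word idea can be sketched with a toy example. Everything here is made up for illustration (the word table and probabilities are hand-written); a real LLM learns billions of parameters from text and works on tokens, not whole words:

```python
# Toy next-word model: hypothetical, hand-made probabilities.
# A real LLM learns these values from huge amounts of text.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "sky": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def most_likely_next(word):
    """Pick the highest-probability continuation for a word."""
    options = next_word_probs.get(word)
    if not options:
        return None  # no known continuation
    return max(options, key=options.get)

def generate(start, length=4):
    """Repeatedly append the most likely next word."""
    words = [start]
    for _ in range(length):
        nxt = most_likely_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # "the cat sat down"
```

Real models also consider the whole preceding context (not just the last word) and usually sample from the probabilities instead of always taking the top choice, which is why the same prompt can give different answers.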
However, from what I remember, the article has more info on how the tool manages to reduce confusion, plus some history on the evolution through GPT-1, 2 and 3.
But what helped me understand it more easily was the video, even if it doesn't describe everything down to the tiniest detail.
It may be even worse as you said, however AI currently is more present in the news and maybe easier to understand because of this.
Also, ChatGPT had a huge amount of personal info leaked to the dark net, not really because they got hacked, but because users put their login credentials into phishing websites.
But also, as anything you input into ChatGPT/Bing Chat/Bard is scanned, it could also become a big antitrust/corporate espionage issue, as OpenAI/Microsoft and Google may be able to spy on any users who might leak the development of another AI.
I can confirm the price, with a bit of variation: 21.99€ for me.