It's been almost a year and it still works fine. I even set it up as a dev account to sideload apps and make calls to the wit.ai API in a few projects.
Except that the information it gives you is often objectively incorrect, and it makes up sources (this has happened to me many times). And no, it can't do what a human can. It doesn't interpret the information it gets, and it can't reach new conclusions based on what it "knows".
I honestly don't know how you can even begin to compare an LLM to the human brain.
You do realize that AI is just a marketing term, right? None of these models learn, have intelligence, or create truly original work. As a matter of fact, if people stopped creating original content, these models would stagnate or enter a feedback loop, poisoning themselves with their own erroneous responses.
I know... Are you the guy from the Tumblr reading comprehension memes?