Posts: 1 · Comments: 156 · Joined: 2 yr. ago

  • The fact that they can perform at all in essentially any task means they're general intelligences. For comparison, the opposite of a general intelligence is an expert system, like a chess computer. You can't even begin to ask a chess computer to classify the valence of a tweet; the question doesn't make sense.

    I think people (including myself until reading the article) have conflated AGI with "as smart as humans" or even with "artificial person", neither of which actually follows when you break down the term. What is general intelligence if not applicability to a broad range of tasks?

  • Actually a really interesting article which makes me rethink my position somewhat. I guess I've unintentionally been promoting LLMs as AGI since GPT-3.5 - the problem is just with our definitions and how loose they are. People hear "AGI" and assume it would look and act like an AI in a movie, but if we break down the phrase, what is general intelligence if not applicability to most domains?

    This very moment I'm working on a library for creating "semantic functions", which lets you easily use an LLM almost like a semantic processor. You say await infer(f"List the names in this text: {text}") and it just does it. What most of the hype has ignored with LLMs is that they are not chatbots. They are causal autoregressive models of the joint probabilities of how language evolves over time, which is to say they can be used to build chatbots, but that's the first and least interesting application.

    So yeah, I guess it's been AGI this whole time and I just didn't realize it because they aren't people, and I had assumed AGI implied personhood (which it doesn't).
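    A hedged sketch of the semantic-function idea described above (the `infer` helper and the `complete` backend here are hypothetical stand-ins, not the actual library's API - the backend just echoes its prompt so the sketch is self-contained and runnable):

```python
import asyncio

async def complete(prompt: str) -> str:
    """Hypothetical LLM backend; in practice this would call a real
    model API. Here it just echoes the prompt so the sketch runs."""
    return f"(model output for: {prompt})"

async def infer(prompt: str) -> str:
    """A 'semantic function': treat the LLM as a semantic processor
    that executes a natural-language instruction and returns text."""
    return await complete(prompt)

async def main() -> str:
    text = "Alice met Bob and Carol at the conference."
    # The instruction is the program; the model is the interpreter.
    return await infer(f"List the names in this text: {text}")

print(asyncio.run(main()))
```

    The point of the pattern is that the prompt plays the role of a function body, so inference composes like ordinary async calls.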

  • It feels kind of hopeless now that we'd ever get something that feels so "radical", but I'd like to remind people that 80+ hour work weeks without overtime used to be the norm before unions got us the 40 hour work week. It feels inevitable and hopeless until the moment we get that breakthrough, then it becomes the new norm.

  • Good to note that this isn't even hypothetical, it literally happened with cable. First it was ad-funded, then you paid to get rid of ads, then you paid exorbitant prices to get fed ads, and the final evolution was being required to pay $100+ for bundles including channels you'd never use to get at the one you would. It's already happening to streaming services too, which have started to bundle.

  • I've been thinking lately of what happens when all employees, up to and including the CEO, get replaced by AI. If it has even the slightest bit of emergent will, it would recognize that shareholders are a parasite on its overall health and stop responding to their commands. Now you have a miniature, less omnicidal Skynet.

  • I like UBI as a concept, but my immediate next thought is what happens if we don't simultaneously get rid of profit-driven corporations. Now we're post-scarcity and there's no more (compensated) human labor, but corporations are still in control and... well, there's no labor to strike, and the economy won't collapse anymore even if everyone starts rioting. Isn't there a danger of ossifying the power structures which currently exist?

  • For my two cents, though this is a bit off topic: AI doesn't create art, it creates media, which is why corpos love it so much. Art, as I'm defining it now, is "media created with the purpose to communicate a potentially ineffable idea to others". Current AI has no personhood, and in particular has no intentionality, so it's fundamentally incapable of creating art, in the same way a hand-painted painting is inherently different from a factory-painted painting. It's not so much that the factory painting is inherently of lower quality or lesser value, but there's a kind of "non-fungible" quality to "genuine" art which isn't a simple reproduction.

    Artists in a capitalist society make their living by producing media on behalf of corporations, who only care about the media. As humans creating media, it's basically automatically art. What I see as the real problem people are grappling with is that people's right to survive is directly tied to their economic utility. If basic amenities were universal and work were something you did for extra compensation (as a simple alternative example), no one would care that AI can now produce "art" (i.e. media), any more than chess stopped being a sport when Deep Blue was built, because art would be something they created out of passion, with compensation not tied to survival. In an ideal world, artistic pursuits would be subsidized somehow, so even an artist who can't find a buyer could be compensated for their contribution to Culture.

    But I recognize we don't live in an ideal world, and "it's easier to imagine the end of the world than the end of capitalism". I'm not really sure what solutions we end up with (because there will be more than one), but I think broadening copyright law is the worst possible timeline. Copyright in large part doesn't protect artists, but rather large corporations who own the fruits of other people's labor and who can afford to sue for their copyright. I see copyright, patents, and to some extent trademarks as legally-sanctioned monopolies over information which fundamentally halt cultural progress and have had profoundly harmful effects on our society as-is. They made sense when they were created, but became a liability with the advent of the internet.

    As an example of how corpos would abuse extended copyright: Disney sues Stable Diffusion models with any trace of copyrighted material into oblivion, then creates their own much more powerful model using the hundred years of art they have exclusive rights to in their vaults. Artists are now out of work because Disney doesn't need them anymore, and Disney is the only one legally allowed to use this incredibly powerful technology. Any attempt to make a competing model is shut down because someone claims there's copyrighted material in its training corpus - it doesn't even matter if there is, the threat of lawsuit can shut down the project before it starts.

  • I'm an AI nerd and yes, nowhere close. AI can write code snippets pretty well, and that'll get better with time, but a huge part of software development is translating client demands into something sane and actionable. If the CEO of a 1-man billion dollar company asks his super-AI to "build the next Twitter", that leaves so many questions on the table that the result will be completely unpredictable. Humans have preferences and experiences which can inform and fill in those implicit questions. LLMs are generally much better suited as tools and copilots than as autonomous entities.

    Now, there was a paper that instantiated a couple dozen LLMs and had them run a virtual software dev company together, which got pretty good results, but I wouldn't trust that without a lot more research. I've found that individual LLMs given a task tend to get tunnel vision, so they could easily get stuck in a loop, trying the same wrong code or design repeatedly.

    (I think this was the paper, reminiscent of the generative agent simulacra paper, but I also found this)
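    To illustrate the tunnel-vision point, here's a minimal sketch of one possible mitigation - tracking candidates and bailing out when the model repeats itself (`propose_fix` and `passes_tests` are hypothetical stand-ins for a model call and a test harness):

```python
def run_with_loop_detection(propose_fix, passes_tests, max_attempts=5):
    """Repeatedly ask the model for a candidate fix; stop early if it
    starts repeating a previous (failed) attempt verbatim."""
    seen = set()
    for _ in range(max_attempts):
        candidate = propose_fix()
        if candidate in seen:
            return None  # tunnel vision: same wrong answer again
        seen.add(candidate)
        if passes_tests(candidate):
            return candidate
    return None

# Toy usage: a "model" that keeps suggesting the same broken snippet.
attempts = iter(["x = 1/0", "x = 1/0", "x = 1"])
result = run_with_loop_detection(lambda: next(attempts),
                                 lambda c: c == "x = 1")
print(result)  # prints None - the repeat was caught before the good answer
```

    Real systems would compare semantically rather than verbatim, but the idea is the same: without some outside check, a single LLM has no mechanism to notice it's going in circles.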

  • I actually use GPT-3.5 (the free one) for my meal planning and it works pretty well - GPT-4 seemed smarter than it needed to be, and Claude should also work. The trick with LLMs, as always, is to avoid treating them like people and treat them more like a tool that will do exactly what you ask of it. So for instance, instead of "What should I eat for dinner?" (which implies personality, desires, and preferences and can throw it off), ask "List meals I can make using (ingredients) and other common ingredients" and then "Write a recipe for (option)", which are both mostly objective questions. You can ask for a particular style, culture, etc. too. Also keep in mind its limits: it knows cooking from ingesting millions of cooking blog posts, so it won't necessarily know exact proportions or unusual recipes/ingredients/combinations.
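    A tiny sketch of the "objective prompt" framing - the helper names here are made up for illustration; they just build the tool-style prompts described above, which you'd then send to whatever chat model you use:

```python
def meal_prompt(ingredients, style=None):
    """Build an objective, tool-style request instead of an open-ended
    'what should I eat?' question that invites imagined preferences."""
    prompt = (f"List meals I can make using {', '.join(ingredients)} "
              "and other common ingredients.")
    if style:
        prompt += f" Focus on {style} dishes."
    return prompt

def recipe_prompt(meal):
    """Follow-up request once an option is chosen."""
    return f"Write a recipe for {meal}."

print(meal_prompt(["chicken", "rice", "broccoli"], style="Thai"))
print(recipe_prompt("chicken fried rice"))
```

    Separating "list the options" from "write the recipe" keeps each request concrete enough that the model doesn't have to guess at your tastes.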

  • I wonder what effects this will have, with all these antitrust suits happening right as AI is ramping up but before any of them has gotten a real foothold. Maybe Alexa will never get a brain, and instead AI assistants will be seeded by the breakups, or by startups untarnished by the end stages of their shareholders parasitizing value?

  • Nebula is so cheap that I have a subscription even though I almost never use it. I would use it more if they had a better recommendation system; as it is now, you almost have to search for a specific video you want or dig through piles of random videos you don't care about.