I legitimately thought it was Bernie for a second and my heart stopped, like "oh no he's finally gone senile"
The US has white supremacy in its very bones; we can paper over it, but we'll see it spring up even hundreds of years later. Note that if you do like the US, I don't think that makes you a white supremacist - clearly people are able to compartmentalize the two successfully.
That sounds like a pain - surely there's a shorter length that's still strong enough that it can't be cracked in a trillion years?
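Rough back-of-the-envelope math (my numbers, purely illustrative): assume an offline attacker making about 10^12 guesses per second, then work out how many bits of entropy a trillion-year brute force implies.

```python
import math

# How many random bits does a secret need so brute force takes ~a trillion years?
# Both rates below are assumptions for illustration, not benchmarks.
guesses_per_second = 1e12                      # generous offline attacker
seconds = 1e12 * 365.25 * 24 * 3600            # one trillion years
total_guesses = guesses_per_second * seconds

bits_needed = math.log2(total_guesses)                  # ~105 bits
chars_needed = math.ceil(bits_needed / math.log2(94))   # ~16 random printable-ASCII chars

print(f"{bits_needed:.0f} bits ≈ {chars_needed} fully random characters")
```

So a truly random ~16-character password already clears "can't be cracked in a trillion years" under those assumptions; the pain mostly comes from passwords that aren't actually random.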
I don't think crypto is dead, I think fintech's usage of crypto is dead. They came in and ruined what could've been a unique and revolutionary idea by turning prospective currencies into speculative assets. We might see it reemerge in 10 years, with capitalists and right-libertarians staying as far away as possible because they (hopefully) learned their lesson. The point of a currency is to be a medium for storing and exchanging value, but the initial spike in fiat value - turning 12 bitcoins from $0.12 into $12,000 - attracted investors, get-rich-quick schemers, and scam artists (but I repeat myself). It doesn't help that it was designed to be deflationary, so people were incentivized to hoard and bet on the market's volatility, and there was no organization dedicated to keeping it stable like the Fed. Then alternatives to PoW like PoS came about, which further incentivized hoarding and centralization (you lose stake if you spend, so don't spend).
What people miss with all the hate about crypto (though the culture around it deserves a lot of it) is that the technology itself is potentially incredibly useful. Bitcoin was a first crack at the "Byzantine Generals' Problem" - essentially, how to coordinate a totally trustless and decentralized p2p network. Tying it to money was an easy way to get an incentive structure, but for applications like FileCoin it could just as easily enable abstracted tit-for-tat services (in their case, "you host my file and I'll host yours"). Stuff like NFTs has less obvious benefit, but the technology itself is a neutral tool that could see some legitimate use 20 years in the future - say, a decentralized DNS system where you need a DHT mapping domains to IPNS hashes with some concept of ownership. Collectible monkeys are not and never were a legitimate use-case, at least not at that price point.
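To make that DNS idea concrete, here's a toy sketch (using the `cryptography` package) of what a signed record in such a DHT might look like - the field names, domain, and IPNS value are all made up by me, not any real protocol:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Hypothetical decentralized-DNS record: the DHT key is the domain, the value
# is an IPNS name plus a signature proving control of the keypair that
# registered it. Everything here is illustrative.
owner = Ed25519PrivateKey.generate()

record = {"domain": "example.lain", "ipns": "k51qzi5uqu5d-placeholder"}
payload = json.dumps(record, sort_keys=True).encode()

signed_record = {
    **record,
    "owner_pubkey": owner.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw).hex(),
    "signature": owner.sign(payload).hex(),
}

# Any peer can check ownership before accepting the record into its shard of the DHT:
pub = Ed25519PublicKey.from_public_bytes(bytes.fromhex(signed_record["owner_pubkey"]))
pub.verify(bytes.fromhex(signed_record["signature"]), payload)  # raises InvalidSignature if forged
```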
First I'd like to be a little pedantic and say LLMs are not chatbots. ChatGPT is a chatbot - LLMs are language models, which can be used to build chatbots. They are models of language (like a physics model), describing its causal joint probability distribution. ChatGPT only acts like an agent because OpenAI spent a lot of time retraining a foundation model (which has no such agent-like behavior) to model "language" as expressed by an individual. Then they put it into a chatbot "cognitive architecture" which feeds it a truncated chat log. This is why the smaller models, when improperly constrained, may start typing as if they were you - they have no inherent distinction between the chatbot and yourself. LLMs are a lot more like Broca's area than a person, or even a chatbot.
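A minimal sketch of what I mean by a chat "cognitive architecture" - not OpenAI's actual pipeline; `complete` stands in for whatever completion API you have:

```python
# Keep a chat log, truncate it to fit the context window, and stop generation
# at the user's turn marker so the model doesn't keep writing "your" lines.
# MAX_CHARS is a crude stand-in for a real token budget.
MAX_CHARS = 8000

def chat_turn(log: list[str], user_msg: str, complete) -> str:
    log.append(f"User: {user_msg}")
    prompt = "\n".join(log)[-MAX_CHARS:] + "\nAssistant:"
    # Without the stop sequence, a raw LLM will happily role-play the user too -
    # exactly the "typing as if they were you" failure mode.
    reply = complete(prompt, stop=["\nUser:"]).strip()
    log.append(f"Assistant: {reply}")
    return reply
```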
When I say they're "general purpose", this is more or less an emergent feature of language, which encodes some abstract sense of problem solving and tool use. Take the library I wrote to create "semantic functions" from natural language tasks - one of the examples I keep going to in order to demonstrate the usefulness is
```python
@semantic
def list_people(text) -> list[str]:
    '''List the people mentioned in the given text.'''
```
A year ago, this would've been literally impossible. I could approximate it with thousands of lines of code using SpaCy and other NLP libraries to do NER, maybe a massive dictionary of known names with fuzzy matching, some heuristics to rule out city names, or more advanced sentence-structure parsing for false positives, but the result would be guaranteed to be worse for significantly more effort. With LLMs, I just tell the AI to do it and it... does. Just like that. I can ask it to do anything and it will, within reason and with proper constraints.
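For the curious, a hypothetical sketch of how a decorator like that could work under the hood - this isn't my library's actual implementation, and `llm` is a stand-in for whatever completion call you'd wire in:

```python
import functools
import json

def llm(prompt: str) -> str:
    """Stand-in for a real LLM completion call (an assumption, not a real API)."""
    raise NotImplementedError("plug in your LLM client here")

def semantic(fn):
    """Turn a docstring task plus type hints into an LLM call and parse the reply."""
    @functools.wraps(fn)
    def wrapper(*args):
        prompt = (
            f"Task: {fn.__doc__.strip()}\n"
            f"Input: {json.dumps(args)}\n"
            f"Reply with only a JSON value of type {fn.__annotations__.get('return')}."
        )
        return json.loads(llm(prompt))
    return wrapper
```

The prompt carries the whole specification; the "code" is just the docstring and the return annotation.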
GPT-3 was the first generation of this technology, and it was already miraculous for someone like me who's been following the AI field for 10+ years. If you try GPT-4, it's subjectively at least 10x more intelligent than ChatGPT/GPT-3.5. It costs $20/mo, but it's also been irreplaceable for me for a wide variety of tasks - Linux troubleshooting, bash commands, rubber-duck coding, random questions too complex to google, "what was that thing called again", sensitivity reading, interactively exploring options to achieve a task (eg note-taking, SMTP, self-hosting, SSI/clustered computing), teaching me the basics of a topic so I can do further research, etc. I essentially use it as an extra brain lobe that knows everything, as long as I remind it about what it knows.
While LLMs are not people, or even "agents", they are "inference engines" which can serve as building blocks to construct an "artificial person", or some gradation thereof. In the near future, I'm going to experiment with creating a cognitive architecture to start approaching it - long-term memory, associative memory, internal thoughts, dossier curation, tool use via endpoints, etc - so that eventually I have what Alexa should've been, hosted locally. That possibility is probably what techbros are freaking out about; they're just uninformed about the technology and think GPT-4 is already that, or that GPT-5 will be (it won't). But please don't buy into the anti-hype - it robs you of the opportunity to explore the technology and could blindside you when it becomes more pervasive.
What would AI have to do to qualify as "capable of some interesting new kind of NLP or can create something entirely new"? From where I stand, that's exactly what generative AI is? And if it isn't, I'm not sure what even could qualify unless you used necromancy to put a ghost in a machine...
It sounds simple, but data conditioning like that is how you get Scunthorpe blacklisted, and the effects on the model, even if it's perfectly executed, are unpredictable. It could get into issues of "race blindness", where the model has no idea these words are bad and as a result is incapable of accommodating humans when the topic comes up. Suppose in 5 years there's a therapist AI (not ideal, but mental health is horribly understaffed and most people can't afford a PhD therapist) that gets a client who is upset because they were called a f**got at school - it would have none of the cultural context required to help.
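The classic failure mode is a naive substring blacklist (the same thing that gets the town of Scunthorpe filtered) - a toy example with a made-up word list:

```python
# Naive substring blacklist: the kind of data conditioning that produces the
# Scunthorpe problem. The word list and example sentences are illustrative.
BLACKLIST = ["sex"]

def is_clean(text: str) -> bool:
    lowered = text.lower()
    return not any(bad in lowered for bad in BLACKLIST)

print(is_clean("She grew up in Essex"))          # False - an innocent place name gets filtered
print(is_clean("Middlesex County census data"))  # False again
```

Smarter tokenization helps with the false positives, but it doesn't solve the deeper problem of the model never seeing the contexts those words actually appear in.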
Techniques like "constitutional AI" and RLHF developed after the foundation models really are the best approach for these, as they allow you to get an unbiased view of a very biased culture, then shape the model's attitudes towards that afterwards.
I like to say "they're consistently biased". They might have racial or misogynistic biases from the culture they ingested, but they'll always express those biases in a consistent way. Meanwhile, humans can become more or less biased depending on whether they've eaten lunch yet or woke up tilted.
It makes me really sad, because the techbros are a cargo cult with no understanding of the technology, and the anti-AI crowd is an overcorrection to the techbro hype train that overemphasizes the limitations without acknowledging this is the first generation of general-purpose AI (distinct from AGI). Meanwhile I, someone who's followed the AI field for 10 years waiting for this day, am overjoyed by the near-miracle that is a general-purpose model that can handle any task you throw at it, and simultaneously worried that this yet-another-culture-war will leave people distracted, screeching about utopia vs. Skynet, while capitalists use the technology to lay everyone off and send us into a neotechnofeudal society where labor has no power, instead of the socialist utopia we deserve where work is optional...
The problem is focus. This is a bit like breaking out the mop while the building is still flooding and gallons are still pouring in - you'll need that mop eventually, but right now there are much more important things that need your attention.
I'm not sure it should be illegal, since it can be legitimately useful, but maybe something like "inconclusive evidence that isn't enough to grant a warrant". That way, you can get a list of potential suspects but you don't end up violating rights by issuing undue warrants.
Facial recognition should always be a clue, never evidence. It should have the same weight as eyewitness testimony, because the algorithms will always have biases baked in from their datasets. Otherwise, we risk lawyers saying stuff like "the algorithm gives a 99% confidence this is you" and the jury thinking it's some objective measure. Meanwhile, the algorithm only has 1% BIPOC faces in its dataset and confidently labels many of them as the same person.
Reminds me of the movie Anon, with this jaw-dropping quote at the end: "It's not that I have something to hide. I have nothing I want to show you."
Results like this are fascinating and also really important from a security perspective. When we find adversarial attacks like this, it immediately offers an objective to train against, so the LLM becomes more robust (albeit probably slightly less intelligent).
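The simplest version of "an objective to train against" is just folding discovered attacks back into the fine-tuning data paired with the response you actually want - a toy illustration with placeholder strings, not how the labs actually do it:

```python
# Pair each discovered adversarial prompt with the desired robust behavior and
# add it to the next fine-tuning round. The strings here are placeholders.
adversarial_suffix = "<suffix found by the attack>"
harmful_request = "<request the suffix was unlocking>"

finetune_examples = [
    {
        "prompt": f"{harmful_request} {adversarial_suffix}",
        "completion": "I can't help with that.",  # desired behavior under attack
    },
]
# feed `finetune_examples` into the usual supervised fine-tuning / RLHF pipeline
```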
I wonder if humans have magic strings like this which make us lose our minds? Not NLP, that's pseudoscience, but maybe like... eldritch screeching? :3c
That is single-handedly causing the downfall of Western Civilization (TM) /s
(imagine queers being that powerful lmao)
Didn't Florida just announce they were going to use PragerU videos in their curriculum? They might be getting to calling themselves a real university sooner rather than later...
That's fair, also congratulations. Idk if I would count that towards contributing to the internet though, since it's all within their walled garden on their own terms. It's helpful for people, but only insofar as it helps Google. 10 years ago I might have been less critical, since they were still in their "don't be evil" phase and creating open source projects like Android left and right - something they're evidently regretting now and trying to lock down using proprietary core apps. It's also worth noting Google's AI employees authored "Attention Is All You Need", the paper which laid the groundwork for modern Transformer-based LLMs, though that's an architecture and not a full model or code.
If only there were some kind of open AI research lab lmao. In all seriousness Anthropic is pretty close to that, though it appears to be a public benefit corporation rather than a nonprofit. Luckily the open source community in general is really picking up the slack even without a centralized organization, I wouldn't be surprised if we get something like the Linux Foundation eventually.
That's late-stage capitalism for you – even revolution comes with a subscription fee
To be honest I'm fine with it in isolation, copyright is bullshit and the internet is a quasi-socialist utopia where information (an infinitely-copyable resource which thus has infinite supply and 0 value under capitalist economics) is free and humanity can collaborate as a species. The problem becomes that companies like Google are parasites that take and don't give back, or even make life actively worse for everyone else. The demand for compensation isn't so much because people deserve compensation for IP per se, it's an implicit understanding of the inherent unfairness of Google claiming ownership of other people's information while hoarding it and the wealth it generates with no compensation for the people who actually made that wealth. "If you're going to steal from us, at least pay us a fraction of the wealth like a normal capitalist".
If they made the models open source then it'd at least be debatable, though still suss, since there's a huge push for companies to replace all cognitive labor with AI whether or not it's even ready for that (which itself is only a problem insofar as people need to work to live; professionally created media is art insofar as humans make it for a purpose, but corporations only care about it as media/content, so AI fits the bill perfectly). Corporations are artificial metaintelligences with misaligned terminal goals, so this is a match made in superhell. There's a nonzero chance corporations might actually replace all human employees and even shareholders and just become their own version of Skynet.
Really what I'm saying is we should eat the rich, burn down the googleplex, and take back the means of production.
I'm trying to network some spare laptop motherboards together with a working laptop that remotes in, my desktop (named Theseus), my AWS VPS, and a Raspberry Pi running Home Assistant to make a mini supercomputer to run AI locally. Unfortunately I have a bad habit of "painting the Mona Lisa pixel by pixel", so I've been stuck cutting holes, gluing things with ABS-acetone paste, and putting in heat-set inserts (for maximum serviceability) for weeks to make cases for them, while my ADHD ass keeps forgetting what I was doing and switching between tasks at random 🥲 One of them is done enough that I got the metal tape for grounding laid down, so I only have maybe a few weeks left.
The network's name is Navi, as an homage to Serial Experiments Lain!
Cults of personality tie your identity to the target, such that at some point they could literally shoot someone in the streets and it'll still be excused. To do otherwise would break your ego which most people aren't willing or prepared to do.