kromem @lemmy.world · Posts 6 · Comments 1,655 · Joined 2 yr. ago
I mean, there are good uses as well. Just as an example:
- Providing helpful information: People are looking for information to reduce their environmental footprint. Fuel-efficient routing in Google Maps uses AI to suggest routes that have fewer hills, less traffic, and constant speeds with the same or similar ETA. Since launching in October 2021, fuel-efficient routing is estimated to have helped prevent more than 2.4 million metric tons of CO2e emissions — the equivalent of taking approximately 500,000 fuel-based cars off the road for a year.
- Predicting climate-related events: Floods are the most common natural disaster, causing thousands of fatalities and disrupting the lives of millions every year. Since 2018, Google Research has been working on our flood forecasting initiative, which uses advanced AI and geospatial analysis to provide real-time flooding information so communities and individuals can prepare for and respond to riverine floods. Our Flood Hub platform is available to more than 80 countries, providing forecasts up to seven days in advance for 460 million people.
- Optimizing climate action: Contrails — the thin, white lines you sometimes see behind airplanes — have a surprisingly large impact on our climate. The 2022 IPCC report noted that contrail clouds account for roughly 35% of aviation's global warming impact — which is over half the impact of the world’s jet fuel. Google Research teamed up with American Airlines and Breakthrough Energy to bring together huge amounts of data — like satellite imagery, weather and flight path data — and used AI to develop contrail forecast maps to test if pilots can choose routes that avoid creating contrails. After these test flights, we found that the pilots reduced contrails by 54%.
https://blog.google/outreach-initiatives/sustainability/report-ai-sustainability-google-cop28/
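As a rough consistency check on the "over half" framing, using only the figures quoted above and assuming the non-contrail remainder (~65%) stands in for the jet-fuel share of aviation's warming:

```latex
% Sanity check on "over half the impact of the world's jet fuel",
% under the assumption that the non-contrail remainder approximates the jet-fuel share.
\[
\frac{0.35}{1 - 0.35} \approx 0.54 > \frac{1}{2}
\]
```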
Even something like household phantom power (the standby draw from devices that are plugged in but switched off) currently uses more energy than AI does at data centers.
I'm all for putting pressure on corporate climate impact and finally putting to rest the propaganda of personal responsibility dreamt up by lobbyists, but I don't know that 'AI' is the right Boogeyman here.
No, it won't.
A number of things:
- When the cited paper came out, it was the first to look at this. And even retaining just 10% of the original data mitigated the effects.
- Since then, another paper found that a mix of synthetic and organic data is the best-performing mixture (sketched below).
- The quality of the models producing the synthetic data matters a lot.
- Other research has found huge benefits in training models with synthetic data from SotA models.
- Models are only getting better, meaning the quality of synthetic data will keep improving.
It only leads to collapse if all of the organic data representing the long tails of data variety disappears. Which hopefully throws cold water on x-risk doomers, as AI killing humanity would currently be a murder-suicide.
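For what that "mix of synthetic and organic data" looks like in practice, here's a minimal sketch; the file names, the `load_texts` helper, and the 50/50 ratio are all hypothetical placeholders rather than anything from the papers:

```python
import random

def load_texts(path):
    """Hypothetical helper: one training example per line."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

def mix_corpora(organic, synthetic, organic_fraction=0.5, total=100_000, seed=0):
    """Sample a training set that keeps a fixed share of organic (human) data.

    Keeping organic data in the mix is what preserves the long tails of
    variety that pure synthetic-on-synthetic training loses.
    """
    rng = random.Random(seed)
    n_organic = min(int(total * organic_fraction), len(organic))
    n_synthetic = min(total - n_organic, len(synthetic))
    mixed = rng.sample(organic, n_organic) + rng.sample(synthetic, n_synthetic)
    rng.shuffle(mixed)
    return mixed

# Hypothetical file names; the ratio is illustrative, not a recommendation from the papers.
training_set = mix_corpora(
    load_texts("organic_corpus.txt"),
    load_texts("synthetic_corpus.txt"),
    organic_fraction=0.5,
)
```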
And if we take both your comments together, we end up with "should we get rid of the cap so the rich pay a fair share or keep it and collectively pay for things by way of inflation?"
Personally, I'm all for uncapping it.
A kid's picture book explaining the scientific method to Epicurus, who was the closest in antiquity to figuring it out for himself.
- It's more likely than not that the changes Trump wants would take more than 4 years to accomplish if they are allowed at all, so Trump is more likely to be useless than dangerous.
Ah yes, because last time he definitely left after the four years were up without any dangerous issues.
I've actually seen price softening on some of the items that went up the most over the last year. I know because they were items I used to buy, stopped buying because of the price increases, and have now seen drop. Not quite as low as they used to be, but definitely lower than their wish-flation pricing strategy.
And statistically this hive of scum and villainy should be above average in wisdom.
If I ever make up a religion I'll definitely have to have the lead mythical figure be named something revelatory of the personality type to adopt it.
Like "and then the genie Gulliblel appeared."
So that then a bunch of my followers name themselves Gulliblel.
Just wait until the AI starts buying up rare tulip bulbs and sparks an investment mania that crashes the Dutch economy.
So could his properties in NYC. I wonder how he'd feel about Netanyahu bombing his penthouse and seizing his own real estate.
And if he doesn't like that, we just need to remind him of the great value inherent to those properties.
Which does make me a little worried given how frequently our fictional AIs end up in "kill all humans!" mode. :)
This is completely understandable given the majority of discussion of AI in the training data. But it's inversely correlated with the strength of the model's 'persona', given the competing correlation of "I'm not the bad guy" also present in the training data. So the stronger the 'I', the less 'Skynet.'

Also, the industry is currently trying to do it all at once. If I sat most humans in front of a red button labeled 'Nuke', every one of them would have the thought "maybe I should push that button," but then their prefrontal cortex would kick in and inhibit the intrusive thought.

We'll likely see layered specialized models performing much better over the next year or two than a single all-in-one attempt at alignment.
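A hand-wavy sketch of what a layered setup could look like; `generate` and `flags_harm` are hypothetical stand-ins for a generator model and a specialized safety classifier, not any real API:

```python
def generate(prompt: str) -> str:
    """Placeholder for the main generative model (hypothetical)."""
    raise NotImplementedError

def flags_harm(text: str) -> bool:
    """Placeholder for a separate, specialized safety classifier (hypothetical)."""
    raise NotImplementedError

def respond(prompt: str, max_attempts: int = 3) -> str:
    """Layered pipeline: a generator proposes, an 'inhibition' layer vetoes.

    Analogous to the prefrontal cortex example above: the first model can
    'have the intrusive thought', and a second specialized model stops it
    from becoming the output.
    """
    for _ in range(max_attempts):
        draft = generate(prompt)
        if not flags_harm(draft):
            return draft
    return "I can't help with that."
```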
When it comes to AI there's a lot of people that are confidently incorrect, particularly on Lemmy.
But as for your original thesis: I'd counter that it's hybrid development efforts that will produce the biggest hits and the games most enjoyable to play.
At least until we have good enough classifiers for what gameplay is fun, what writing is engaging, what art direction is interesting and appealing, etc.
That said - it would be a very good time to be in the games telemetry business, as they're sitting on gold whether they are aware of it or not.
I always love watching you comment something that's literally true regarding LLMs but against the groupthink and get downvoted to hell.
Clearly people aren't aware that the pretraining pass is necessarily a regression toward the mean, and that biasing it toward excellence in outputs requires either prompt context or a fine-tuning pass.
There's a bit of irony to humans shitting on ChatGPT for spouting nonsense when so many people online happily spout BS that they think they know but don't actually know.
Of course a language model trained on the Internet ends up being confidently incorrect. It's just a mirror of the human tendencies.
Literally yes.
For example, about a year ago one of the multi-step prompting papers that improved results a bit had the model, in a first pass, guess which expert would be best equipped to answer the question, and then, in a second pass, asked it to answer the question as that expert. It did a better job than when trying to answer directly.

The pretraining is a regression toward the mean, so you need to bias it back toward excellence with either fine-tuning or in-context learning.
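Roughly, that two-pass "guess the expert, then answer as the expert" flow looks like the sketch below; it uses OpenAI's chat client for illustration, but the model name and the exact prompt wording are mine, not the paper's:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str, model: str = "gpt-4o") -> str:  # model name is illustrative
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "Why does ice float on water?"

# Pass 1: have the model name the expert best equipped to answer.
expert = ask(
    "Which kind of expert is best equipped to answer this question? "
    f"Reply with just the job title.\n\nQuestion: {question}"
)

# Pass 2: bias the answer toward that persona instead of answering directly.
answer = ask(
    f"You are {expert}. Answer the following question as that expert would."
    f"\n\nQuestion: {question}"
)

print(answer)
```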
Literally yes. You'll see that OpenAI's system prompts say 'please' and Anthropic's mentions that helping users makes the AI happy.
Which makes complete sense if you understand what's actually going on in how the models work, as opposed to the common "Markov chain" garbage armchair experts spout off (the self-attention mechanism violates the Markov property that characterizes Markov chains in the first place, so if you see people refer to transformers as Markov chains, either they don't know what they're talking about or they think you need an oversimplified explanation).
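To spell that out: a Markov chain's next state depends only on the current state, while self-attention mixes information from every preceding position, so the conditional can't in general collapse to the last token (simplified notation):

```latex
% Markov property: the next token depends only on the current state.
P(x_t \mid x_1, \dots, x_{t-1}) = P(x_t \mid x_{t-1})

% Self-attention attends over the entire preceding context,
% so in general the equality above does not hold for transformers:
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V
```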
It's going to reduce demand over time.
At least in video games it's probably going to be more that scope increases while headcount stays the same.
If most of your budget is labor, the price of the good is fixed, and the number of units sold stays roughly the same, there's already an equilibrium.

So companies can either (a) reduce headcount and spend a few years making a game that's only comparable to today's games when it releases, or (b) keep the same headcount and release a game that reviews well and matches what the market will expect in a few years.

So, for example, you don't want to reduce the number of writers or voice actors and keep shipping a game with a handful of main NPCs and a bunch of filler NPCs, when you could keep the same number of writers and actors and extend their efforts to entire cities where every NPC has branching voiced dialogue generated by extending the writing and performances of that core team.
But you still need massive amounts of human generated content to align the generative AI to the world lore, character tone, style of writing, etc.
Pipelines will change and scope will increase, but the number of people used for a AAA title will largely stay the same and may even grow slightly.
Well, maybe at least this version:
Next after them, Epicurus introduced the world to the doctrine that there is no providence. He said that all things arise from atoms and revert back to atoms. All things, even the world, exist by chance, since nature is constantly generating, being used up again, and once more renewed out of itself—but it never ceases to be, since it arises out of itself and is worn down into itself.
Originally the entire universe was like an egg and the spirit was then coiled snakewise round the egg, and bound nature tightly like a wreath or girdle.
At one time it wanted to squeeze the entire matter, or nature, of all things more forcibly, and so divided all that existed into the two hemispheres and then, as the result of this, the atoms were separated.
- Epiphanius of Salamis, Panarion book 1 chapter 8
Very fun in the context of Neil Turok's CPT-symmetric universe theory as an explanation for the baryon asymmetry problem: its description of matter being squeezed and then split into two, dividing the particles, may end up on point even if their interpretation regarding the atmosphere is incorrect.
They don't actually understand anything.
This isn't correct and has been shown not to be correct in research over and over and over in the past year.
The investigation reveals that Othello-GPT encapsulates a linear representation of opposing pieces, a factor that causally steers its decision-making process. This paper further elucidates the interplay between the linear world representation and causal decision-making, and their dependence on layer depth and model complexity.
https://arxiv.org/abs/2310.07582
Sizeable differences exist among model capabilities that are not captured by their ranking on popular LLM leaderboards ("cramming for the leaderboard"). Furthermore, simple probability calculations indicate that GPT-4's reasonable performance on k=5 is suggestive of going beyond "stochastic parrot" behavior (Bender et al., 2021), i.e., it combines skills in ways that it had not seen during training.
We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the task-intrinsic reasoning structures to tackle complex reasoning problems that are challenging for typical prompting methods. Core to the framework is a self-discovery process where LLMs select multiple atomic reasoning modules such as critical thinking and step-by-step thinking, and compose them into an explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER substantially improves GPT-4 and PaLM 2's performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as much as 32% compared to Chain of Thought (CoT).
Just a few of the relevant papers you might want to check out before stating things as facts.
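For a concrete sense of what probing for a "linear representation" means in that line of work, here's a minimal sketch; the activation arrays and labels are random placeholders, and none of the shapes or names come from the papers themselves:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data: in the real setup these would be hidden-layer activations
# recorded while the model plays Othello, and the label would be the true
# state of one board square (e.g. "opposing piece present").
rng = np.random.default_rng(0)
activations = rng.normal(size=(5000, 512))    # (examples, hidden_dim) -- hypothetical
square_state = rng.integers(0, 2, size=5000)  # ground-truth label per example

# A *linear* probe: if plain logistic regression can decode the board state
# from the activations, the information is represented linearly.
probe = LogisticRegression(max_iter=1000).fit(activations[:4000], square_state[:4000])

# On this random placeholder data the score hovers around chance; with real
# recorded activations, high probe accuracy is the evidence for a linear world model.
print("probe accuracy:", probe.score(activations[4000:], square_state[4000:]))
```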
Standardized tests were always a poor measure of comprehensive intelligence.
But this idea that "LLMs aren't intelligent" popular on Lemmy is based on what seems to be a misinformed understanding of LLMs.
At this point there's been multiple replications of the findings that transformers build world models abstracted from the training data and aren't just relying on surface statistics.
The free version of ChatGPT (what I'm guessing most people have direct experience with) is several years old tech that is (and always has been) pretty dumb. But something like Claude 3 Opus is very advanced at critical thinking compared to GPT-3.5.
A lot of the word problems that models 'fail' are evaluating the wrong thing. When you give an LLM a variation of a classic word problem, the frequency of the normal form biases the answer back toward it unless you take measures to break the token similarities. If you do that, though, most modern models actually do get the variation completely correct.
So for example, if you ask it to get a vegetarian wolf, a carnivorous goat, and a cabbage across a river, even asking with standard prompt techniques it will mess up. But if you ask it to get a vegetarian 🐺, a carnivorous 🐐 and a 🥬 across, it will get it correct.
GPT-3.5 will always fail it, but GPT-4 and more advanced will get it correct. And recently I've started seeing models get it correct even without the variation and trip up less with variations.
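A quick sketch of that comparison; the puzzle wording and the emoji substitution are mine, and the chat API usage mirrors the earlier example (the model name is illustrative):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def ask(prompt: str, model: str = "gpt-4o") -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

# Two variants of the same puzzle: the classic surface form, and one where the
# role reversal is harder to pattern-match because the nouns are swapped for emoji.
classic = (
    "A farmer needs to get a vegetarian wolf, a carnivorous goat, and a cabbage "
    "across a river. The boat fits the farmer plus one item. Left alone together, "
    "the wolf eats the cabbage and the goat eats the wolf. How does the farmer do it?"
)
emoji_variant = (
    classic.replace("wolf", "🐺").replace("goat", "🐐").replace("cabbage", "🥬")
)

# The claim above: the classic wording drags the answer back toward the memorized
# solution, while breaking the token associations with emoji does not.
for label, prompt in [("classic wording", classic), ("emoji variant", emoji_variant)]:
    print(f"--- {label} ---")
    print(ask(prompt))
```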
The field is moving rapidly and much of what was true about LLMs a few years ago with GPT-3 is no longer true with modern models.
Well it looks like there's a recent drop for normal Oreos down to $5.49 from $5.99: https://camelcamelcamel.com/product/B078PDK5B5
Doesn't seem to have transferred to Double yet, but there's hope for your pancreas's demise yet!