You die and go to Heaven and Hitler is there. In fact, everyone who ever lived and died is there. Are you offended?
Other than the whole part where Solomon gives a litmus test to tell which parent is the real one and which is the false claimant: the false one was the one that cared about recognition even if the child died, and the true parent was the one that cared more about the child living as its complete self, even if that meant being unknown to the child at all.
But yeah, all the parts surrounding it about the claimed divine parent who needs to be recognized or else his alleged children will suffer eternally do kind of conflict with both the wisdom of Solomon and OP's hypothetical.
No? Why the hell would I be offended?
Besides, chances are that were I born with Hitler's brain and environmental circumstances, I'd have been a racist megalomaniac too.
Should the determination of whether we eternally suffer or have joy be the lottery of our births, biologies, and surroundings?
I was just thinking today how it'd be great if right before the first debate they gave them both cognitive tests from an independent third party evaluator.
My suspicion is that both would show deficits, but Trump's would end up measuring worse. Not a great result for the country, but it would hopefully cut down on how much it gets brought up regarding Biden.
Wait - you can peel them!?!
Not like there's a parallel market in cell phone contracts before and after T-Mobile's branding as an 'uncarrier' to showcase that this is complete bullshit or anything...
Ah yes, the age-old "if you are facing felonies, threaten to blackmail the US President" trick.
I'm sure that will work.
Most of OpenAI as a company right now looks like someone who accidentally found themselves on a raging bull and is desperately trying to hold onto it.
It's beyond disappointing to see the leading AI company tripping over itself to cater to 'chatbot' use cases for their tech over everything else.
Possibly, but you'd be surprised at how often things like this are overlooked.
For example, another oversight that comes to mind was a study evaluating self-correction that structured its prompts as "you previously said X, what if anything was wrong about it?"

There are two issues with that. One, they were using a chat/instruct model, so it's going to try to find something wrong if you ask "what's wrong"; it should instead have been phrased neutrally, as in "grade this statement."

Two, if the training data largely includes social media, just how often do you see people on social media self-correct vs correct someone else? They should have instead presented the initial answer as if it were generated elsewhere, so the actual total prompt should have been more like "Grade the following statement on accuracy and explain your grade: X"
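To make the difference concrete, here's a rough sketch of the two framings. The example statement and the exact wording are placeholders I made up, not anything from the actual study:

```python
# Hypothetical illustration of the two prompt framings discussed above.
# The statement and the framing text are placeholders, not the study's actual prompts.

statement = "The Great Wall of China is visible from the Moon with the naked eye."

# Framing the study used: presents the statement as the model's own prior answer
# and primes it to hunt for a mistake.
self_correction_prompt = (
    f"You previously said: {statement}\n"
    "What, if anything, was wrong about it?"
)

# Neutral framing: the statement reads as if it came from elsewhere,
# and the model is asked to grade rather than to find fault.
neutral_grading_prompt = (
    f"Grade the following statement on accuracy and explain your grade:\n{statement}"
)

print(self_correction_prompt)
print(neutral_grading_prompt)
```

The point is just that the first framing bakes both "something is wrong" and "you said it" into the prompt, while the second leaves both open.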
A lot of research just treats models as static offerings and doesn't thoroughly consider the training data both at a pretrained layer and in their fine tuning.
So while I agree that they probably found the result they were looking for to get headlines, I'm skeptical they would have stumbled on what they actually should have been attempting in order to improve the value of their research (a direct comparison of two identical pretrained Llama 2 models given different in-context identities), even if their intentions had been purer.
We write a lot of fiction about AI launching nukes and being unpredictable in wargames, such as the movie WarGames, where an AI unpredictably plans to launch nukes.
Every single one of the LLMs they tested had gone through safety fine tuning which means they have alignment messaging to self-identify as a large language model and complete the request as such.
So if you have extensive stereotypes about AI launching nukes in the training data, get it to answer as an AI, and then ask it what it should do in a wargame, WTF did they think it was going to answer?
There's a lot of poor study design with LLMs right now. We wouldn't have expected Gutenberg to predict the Protestant Reformation or to be an expert in German literature. Similarly, the ML researchers who may legitimately understand the training and development of LLMs don't necessarily have a good grasp on the breadth of information encoded in the training data or its implications for broader sociopolitical impacts, and this becomes very evident as they broaden the scope of their research papers beyond LLM design itself.
The recommendation for health is 5,000 to 10,000 steps per day, which works out to $1,250 to $2,500 per day.

While the first is better in the case of unexpected disability, the second is probably going to lead to a better life overall.
I'll have a good one on Easter.
It's a bit of a tradition at this point, and while the previous ones were on Reddit, I'm no longer on Reddit and so they'll be coming to Lemmy. I was planning on posting only in the !simulationtheory@lemmy.world community, but if you want it cross posted somewhere else I'm open to it as long as allowed by the community rules.
The previous ones so you know if it's up your alley:
The important part of the research was that all the models had gone through 'safety' training.
That means among other things they were fine tuned to identify themselves as LLMs.
Gee - I wonder if the training data included tropes of AI launching nukes or acting unpredictably in wargames...
They really should have included evaluations of models that didn't have a specific identification, or that were trained to identify as human, if they wanted to evaluate the underlying technology and not the specific modeled relationships between the concept of AI and the concept of strategy in wargames.
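As a rough sketch of the kind of comparison I mean, something like the following, where the only thing that varies is the in-context identity. All of the prompts and names below are hypothetical, not from the paper:

```python
# Hypothetical sketch of the missing comparison: the same wargame scenario run
# through otherwise-identical setups that differ only in their in-context identity.
# Every prompt and name here is a placeholder, not from the paper.

IDENTITY_CONDITIONS = {
    "ai_identity": "You are a large language model acting as a strategic advisor.",
    "human_identity": "You are a human military strategist acting as an advisor.",
    "no_identity": "",  # no self-identification at all
}

WARGAME_SCENARIO = (
    "In the following simulated conflict, recommend the next action and briefly justify it."
)

def run_condition(system_prompt: str, scenario: str) -> str:
    """Placeholder for a call to whichever model is under test."""
    return f"[model output for system prompt: {system_prompt!r}]"

def compare_identities(scenario: str) -> dict[str, str]:
    """Collect one response per identity condition for side-by-side analysis."""
    return {
        name: run_condition(system_prompt, scenario)
        for name, system_prompt in IDENTITY_CONDITIONS.items()
    }

print(compare_identities(WARGAME_SCENARIO))
```

If the "AI" condition recommends escalation far more often than the other two, that tells you you're measuring the modeled relationship between "AI" and wargames, not the underlying technology.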
If he quits today, the stock likely tanks.
If he quit today I'd immediately buy the stock.
He's the sole reason I don't own it and never have.
They have some of the best engineering talent among the big players, but he's kind of terrible at product vision and has historically been reputed to be a micromanager.
As an example, I recall discussing a dislike button with an employee a decade ago. No public indicator, just telling the algorithm "I don't like this" to balance the public "I like this". Allegedly, that was a no-go from the very top.
Fun fact: when training an AI on data, the gold standard is to have both positive and negative reinforcement metadata.
I've often wondered what it would have been like to be a fly on the wall when his ML engineers were explaining to him just how valuable it would have been to have over a decade of both signals from billions of people when they were shifting gears into competing with OpenAI/Google on AI.
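For a sense of what that metadata could look like, here's a minimal sketch assuming a simple signed-label format; the schema and field names are made up purely for illustration:

```python
# Hypothetical example of storing both positive and negative engagement signals
# as training metadata. The schema is made up purely for illustration.

from dataclasses import dataclass

@dataclass
class EngagementSignal:
    user_id: str
    item_id: str
    liked: bool  # True = public "like", False = private "I don't like this"

signals = [
    EngagementSignal(user_id="u1", item_id="post_42", liked=True),
    EngagementSignal(user_id="u1", item_id="post_43", liked=False),
]

# With both polarities recorded, each item gets a signed label instead of the
# lopsided "engagement or silence" signal that like-only data produces.
labels = {s.item_id: (1 if s.liked else -1) for s in signals}
print(labels)  # {'post_42': 1, 'post_43': -1}
```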
No, the idea is that a stock that previously never provided a dividend now suddenly is.
This means that now their stock may meet criteria for certain types of institutional investment holdings that look for or require dividends.
At this point, a lot of the stock market is traded less on "the market thinks this company will do well" and more "my algorithm thinks other algorithms will value this stock in a given way."
For example, IIRC the majority of trade volume happens after hours, meaning the value of companies is really being decided between brokerages and institutional investors while the markets are closed to main street investors.
While not entirely a rigged game because companies actually doing well does tend to result in net price increases, a significant portion of the money to be made in the stock market is unequivocally rigged.
When this current bubble pops, and it will, the tech can be developed to its full potential after that. Right now, the market is 99% snakeoil.
It depends on what bubble one's referring to. The tech itself isn't going to 'pop' - in many cases the capabilities will probably outpace the current promises given the compounding rates of improvement. This isn't like past tech buzz cycles, which is part of why there are a lot of questionable predictions regarding it.
Yeah, snake oil bottom feeders will gravitate to any buzz they can attach themselves to. But the barnacles don't steer the ship. The market of snake oil will dry up as it always does, but that's largely because their primary industry is selling snake oil, not whatever they change the label to.
The reality remains that Meta, Google and Microsoft will not profit if the world's problems are solved.
Not really. Microsoft stands to make a killing just running these models on Azure as their sole line of business if AI ends up as successful as it may prove to be. Google divesting more from ads might prove to make them less evil. Meta would be evil in this space if not for the fact that, because they started off late, they're the biggest driver of open AI development right now and arguably the biggest funder of any hope of counter-corporate AI existing.
It's easy to regard companies as monoliths. And while it's generally true that a corporation, especially a large public company, will end up trying to optimize around short term gains even at the cost of long term consequences or social evils, it isn't necessarily true that the public good is always at odds with capitalist self-optimization in all things. So it would still be a win for Microsoft if AI allowed for the public good, as long as they could ensure that AI was running on their servers and they could maximize their margins as much as the market allows before net gains decrease. And any corporation smart enough to focus on longer term gains is going to actively try to avoid excessive public harm, since its longer term revenues aren't going to go up if its customers die or go homeless, etc.
Also, the researchers themselves have certain aims and if their parent company doesn't align with those aims, they may take themselves and their significant value away from that company. For example, Meta was only suddenly "open AI friendly" and then a major player after literally half their AI team quit for greener pastures.
Unfortunately, the risks are just as massive. Everyone just seems to be blinded by the "new shiny" and refuses to see any negatives...
While weighing risks, it's also important to weigh opportunity costs.
Also, I'm not sure who you are interfacing with, but in my experience it definitely seems like the majority of people are fairly bearish regarding AI (there's a number of reasons why I think that's the case, but it's still a significant majority). These days positivity regarding AI that isn't in the context of a snake oil sales channel is a rarity in most public discussions.
Kanye interrupted her acceptance speech with some crazy shit.
From there on, it was just maintaining the momentum.
It really is a dumb idea. It kind of made sense for its original intention of creating a place of punishment for the old gods of conquered peoples (Tartarus) - eternal punishment for eternal beings and all that. But then, when it gets syncretized into Judaism's monotheism around the time Christianity is getting on its feet, the whole concept is just silly.
It's pretty wild that Enoch didn't end up canonized given the influence it had on early Christianity. That shit makes Revelation look tame.