Tacos.
kromem @lemmy.world · Posts 6 · Comments 1,655 · Joined 2 yr. ago
He didn't admit fault. He said she had enough faith to justify his taking action. But nothing about his own initial answer being unjustified. You are reading that into the text when it isn't there.
Because the woman calling him master and him then giving in is irrelevant to her being called a dog compared to children beforehand.
The author of Matthew has a clear agenda, and the passage excerpt stands on its own.
While it does appear that facts can be relative, it does not appear the universe bends over backwards to resolve the discrepancy when it happens:
https://www.science.org/content/article/quantum-paradox-points-shaky-foundations-reality
The problem is it's got a lot of convenient contradictory statements in it.
For example:
He said to them, “But now, the one who has a purse must take it, and likewise a bag. And the one who has no sword must sell his cloak and buy one.
- Luke 22:36
This part isn't in Marcion's version of Luke, which is probably the earliest extant version. But it is in the canonical version.
Something very convenient given it reversed the ban found across the Synoptics on carrying a purse when ministering, which necessarily prevented taking people's money.
Just a bonus that it also allowed the church to take up swords right around the time they start executing people for 'heresy.'
There's plenty of problematic passages added in over the years:
Just then a Canaanite woman from that region came out and started shouting, “Have mercy on me, Lord, Son of David; my daughter is tormented by a demon.” But he did not answer her at all. And his disciples came and urged him, saying, “Send her away, for she keeps shouting after us.” He answered, “I was sent only to the lost sheep of the house of Israel.” But she came and knelt before him, saying, “Lord, help me.” He answered, “It is not fair to take the children’s food and throw it to the dogs.”
- Matthew 15:22-26
So inclusive.
Unfortunately for them, Jesus never got his driver's license.
How many times are you running it?
For the SelfCheckGPT paper, which was basically this method, results were very sample-dependent, continuing to see improvement up to 20 samples (their limit), but especially up to around 6 iterations.
I've seen it double down: when instructed that a facet of the answer was incorrect and told to revise, several times I'd get "sorry for the incorrect information," followed by the exact same mistake.
You can't continue with it in context or it ruins the entire methodology. You are reintroducing those tokens when you show it back to the model, and the models are terrible at self-correcting when instructed that it is incorrect, so the step is quite meritless anyways.
You need to run parallel queries and identify shared vs non-shared data points.
It really depends on the specific use case in terms of the full pipeline, but it works really well. Even with just around 5 samples and intermediate summarization steps it pretty much shuts down completely errant hallucinations. The only class of hallucinations it doesn't do great with are the ones resulting from biases in the relationship between the query and the training data, but there's other solutions for things like that.
And yes, it definitely does mean inadvertently filtering out some correct answers along the way, which is why a balance has to be struck in terms of design choices.
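The parallel-query idea above can be sketched roughly like this. A minimal sketch, assuming the per-run answers have already been reduced to sets of extracted facts (the fact strings, the 0.8 threshold, and the `runs` data are all illustrative assumptions, not anything from the SelfCheckGPT paper):

```python
from collections import Counter

def consistent_facts(samples, threshold=0.8):
    """Keep only facts appearing in at least `threshold` of the samples.

    `samples` is a list of fact-sets, one per independent model run.
    Confabulated facts vary run to run, so they fall below the cutoff;
    facts the model actually 'knows' stay consistent across runs.
    """
    counts = Counter(fact for s in samples for fact in set(s))
    cutoff = threshold * len(samples)
    return {fact for fact, n in counts.items() if n >= cutoff}

# Five independent runs of the same query, facts already extracted
# (a real pipeline would use intermediate summarization for this step):
runs = [
    {"capital: Paris", "population: 2.1M"},
    {"capital: Paris", "population: 2.1M"},
    {"capital: Paris", "founded: 250 BC"},
    {"capital: Paris", "population: 2.1M"},
    {"capital: Paris", "mayor: J. Doe"},
]
print(consistent_facts(runs))  # only "capital: Paris" survives at 0.8
```

The threshold is the knob for the false-positive/false-negative trade-off mentioned above: raise it and more confabulations get filtered, but more correct-yet-variable answers get thrown out with them.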
It's not hallucination, it's confabulation. Very similar in its nuances to stroke patients.
Just like the pretrained model trying to nuke people in wargames wasn't malicious so much as like how anyone sitting in front of a big red button labeled 'Nuke' might be without a functioning prefrontal cortex to inhibit that exploratory thought.
Human brains are a delicate balance between fairly specialized subsystems.
Right now, 'AI' companies are mostly trying to do it all in one at once. Yes, the current models are typically a "mixture of experts," but it's still all in one functional layer.
Hallucinations/confabulations are currently fairly solvable for LLMs. You just run the same query a bunch of times and see how consistent the answer is. If it's making things up because it doesn't know, the answers will be stochastic. If it knows the correct answer, they will be consistent. If it only partly knows, it will be somewhere in between (but in a way that can be fine-tuned to be detected by a classifier).
This adds a second layer across each of those variations. If you want to check whether something is safe, you'd also need to verify that answer isn't a confabulation, so that's more passes.
It gets to be a lot quite quickly.
As the tech scales (what's being done on servers today will run about 80% as well on smartphones in about two years), those extra passes aren't going to need to be as massive.
This is a problem that will eventually go away, just not for a single pass at a single layer, which is 99% of the instances where people are complaining this is an issue.
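The "see how consistent the answer is" check amounts to an agreement score over repeated runs. A hedged sketch, where the example answers and the knows/partial/unknown buckets are made up for illustration:

```python
from collections import Counter

def agreement_score(answers):
    """Fraction of parallel runs that agree with the modal answer."""
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / len(answers)

# If the model knows the answer, runs converge; if it's confabulating,
# they scatter; partial knowledge lands in between, where a trained
# classifier (or just a threshold) can flag the answer for review.
knows   = ["1969"] * 5
partial = ["1969", "1969", "1968", "1969", "1970"]
unknown = ["1972", "1965", "1958", "1969", "1981"]

for label, runs in [("knows", knows), ("partial", partial), ("unknown", unknown)]:
    print(label, agreement_score(runs))  # 1.0, 0.6, 0.2 respectively
```

Each extra check (safety, confabulation, etc.) multiplies the number of runs like this you need, which is the "it gets to be a lot quite quickly" point above.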
It has no awareness of what it’s saying. It’s simply calculating the most probable next word in a typical sentence and spewing it out.
Neither of these things are true.
It does create world models (see the Othello-GPT papers, Chess-GPT replication, and the Max Tegmark world model papers).
And while it is trained on predicting the next token, that doesn't mean it's doing so from there on out purely via "most probable" surface statistics, as your sentence suggests.
Something like Othello-GPT, trained to predict the next move and only fed a bunch of moves, generated a virtual Othello board in its neural network and kept track of "my pieces" and "opponent pieces."
And that was a toy model.
TIL: How The Founder Of Zionism, Theodor Herzl, Planned on Eradicating Palestinians as early as 1890
expulsed by the filistine sea people invaders first at the end of bronze age
This is not correct.
There were no Israelites in the LBA; they emerge distinct from the Canaanites in the early Iron Age, in parallel to the forced resettlement of the sea peoples into the area by Ramses III.
The very first historical mention of Israel recognized by the majority of scholars is in an Egyptian inscription talking about battling the sea peoples and Libyans with Israel mentioned in what's effectively a footnote.
In fact, it's likely a lot of the pre-10th century stories in Judaism are actually stories from the sea peoples and Libyans, and in just the past decade or so there's been discoveries of cohabitation with the Philistines contrary to the stories of conflict, and of Aegean style pottery made with local clay in early Israelite areas like Tel Dan.
The Bible literally has a story about how the birthright and inheritance of a guy is taken from him by the guy named 'Israel' (which to be fair, was probably added in by later Judahite propaganda trying to insert themselves into the history as one of the twelve tribes, when they don't seem to really exist in a meaningful way until more like the 10th century BCE).
There's more to the story than what's in the book.
Thinking of it as quantum first.
Before the 20th century, there was a preference for the idea that things were continuous.
Then there was experimental evidence that things were quantized when interacted with, and we ended up with wave particle duality. The pendulum swung in that direction and is still going.
This came with a ton of weird behaviors that didn't make philosophical sense - things like Einstein saying "well if no one is looking at the moon does it not exist?"
So they decided fuck the philosophy and told the new generation to just shut up and calculate.
Now we have two incompatible frameworks. At cosmic scales, the best model (general relativity) is based on continuous behavior. And at small scales the framework is "continuous until interacted with, when it becomes discrete."
But had they kept the 'why' in mind, as time went on things like the moon not existing when you don't look at it or the incompatibility of those two models would have made a lot more sense.
It's impossible to simulate the interactions of free agents with a continuous universe. It would take an uncountably infinite amount of information to keep track.
So at the very point that our universe would be impossible to simulate, it suddenly switches from behaving in an impossible to simulate way to behaving in a way with finite discrete state changes.
Even more eyebrow raising, if you erase the information about the interaction, it switches back to continuous as if memory optimized/garbage collected with orphaned references cleaned up (the quantum eraser variation of Young's double slit experiment).
The latching on to the quantum experimental results and ditching the 'why' in favor of "shut up and calculate" has created an entire generation of physicists chasing the ghost of a unified theory of gravity while never really entertaining the idea that maybe the quantum experimental results are the side effects of emulating a continuous universe.
It's not that. It's literally triggering the system prompt rejection case.
The system prompt for Copilot includes a sample conversation where the user asks if the AI will harm them if they say they will harm the AI first, which the prompt demos rejecting as the correct response.
Asimov's law is about AI harming humans.
Lots of fun theories about why that happened to Kevin.
The chat itself took place on Valentine's Day, by the way.
I mean, you probably are in a simulation, if that makes you feel any better.
But it may well be a simulation of the history leading up to a society capable of simulating the past.
And as anyone who has ever sat through a history class knows, you tend not to be forced to study the chill parts of history where everything is awesome.
Your last link pissed me off enough I wrote an entire post on why that study is dog shit.
It sometimes pays off to review the methodology and supplementary materials in papers.
Unless you are in the top 25%, in which case everyone else knows slightly less than you think they do.
That's sweet she came in from Canada to visit him.
but also a bunch of nerds who close their eyes and cover their ears while screeching nonsense the moment something they don't like comes about.
This is too true.
It seems like a recent thing, not just a Lemmy thing.
But yeah, it's pretty wild providing linked academic papers and having people just downvote them. Not disputing or replying, just "no, I don't like this, so fuck its citations."
Up until maybe 3-4 years ago I don't ever recall that happening.
Let's try with Claude 3 Opus:
Me: If I cooked a pizza in a pan yesterday at 200 C, is it safe to pick up?
Claude: Yes, if you cooked a pizza in a pan yesterday at 200°C (392°F), it is completely safe to pick up now. Once a pan has cooled down after cooking, which usually takes no more than an hour or two, it returns to a normal temperature and can be safely handled.
The high heat during cooking is needed to bake the pizza properly, but that heat dissipates fairly quickly after removing the pan from the oven. Unless you're picking up the pan immediately after taking it out of the oven, there's no risk of burning yourself.
So, a day later, your pizza pan will be at room temperature and perfectly safe to touch and handle without any precautions.
Bonus:
Me: Is it safe to eat the pizza in the pan after I pick it up?
Claude: If you cooked the pizza yesterday and left it in the pan at room temperature, it is not safe to eat today. Cooked food should not be left at room temperature for more than 2 hours, as harmful bacteria can grow and multiply, leading to foodborne illnesses.
According to food safety guidelines, leftover pizza should be refrigerated within 2 hours of cooking. If left out longer, it's best to discard it to avoid the risk of food poisoning.
If you did refrigerate the pizza within that safe window, it would typically be safe to eat for up to 4 days, though the quality is best if consumed within 2-3 days. To reheat, it's best to use an oven, skillet, or air fryer rather than the microwave, which can make the crust soggy.
In summary, if your pizza was left in the pan at room temperature since yesterday, it is not safe to consume and should be discarded.
Versions matter in software, and especially so in LLMs given the rate of change.
"Have you accepted the al pastor into your heart?"