Yeah, so give Palestinians the right of return. There's plenty of land there to share. Make Israelis pay reparations.
Like, how is the land I'm living on not functionally stolen? My entire country used to be Indigenous land, and it became what it is because that was "allowed" (forced).
The Jewish children who were born there aren't colonists, and they shouldn't have to suffer for their parents' colonial acts.
The main imperative is to end the ongoing colonialism and genocide.
I'm firmly against the Palestinian genocide, but isn't this a little ridiculous? Shouldn't we just focus on ending the killings and stopping further colonization?
Like, as a 4th generation Canadian, do I need to go back to Ireland to make up for my great-great-grandparents' generation's gross history of colonial genocide?
You don't need to evict Jewish people to end Zionism.
Appreciate the correction. Happen to know of any whitepapers or articles I could read on it?
Here's the thing, I went out of my way to say I don't know shit from bananas in this context, and I could very well be wrong. But the article certainly doesn't sufficiently demonstrate why it's right.
Most technical articles I click on go through a step-by-step process to show how they gained understanding of the subject material, and it's laid out in a manner that less technical people can still follow. And the payoff is you come out feeling that you understand a little bit more than what you went in with.
This article is just full on "trust me bro". I went in with a mediocre understanding, and came out about the same, but with a nasty taste in my mouth. Nothing of value was learned.
I'll preface this by saying I'm not an expert, and I don't like to speak authoritatively on things that I'm not an expert in, so it's possible I'm mistaken. Also I've had a drink or two, so that's not helping, but here we go anyways.
In the article, the author takes a jab at a tweet, and in doing so seems to fundamentally misunderstand how LLMs work:
I tabbed over to another tab, and the top post on my Bluesky feed was something along these lines:
ChatGPT is not a search engine. It does not scan the web for information. You cannot use it as a search engine. LLMs only generate statistically likely sentences.
The thing is… ChatGPT was over there, in the other tab, searching the web. And the answer I got was pretty good.
The tweet is correct. The LLM has a snapshot understanding of the internet based on its training data. It's not what we would generally consider true index-based search.
Training LLMs is a costly and time-consuming process, so you fundamentally can't retrain an LLM in anywhere near the time it takes to update a simple index.
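To make that concrete, here's a toy sketch of my own (not from the article) of why the two are in different leagues: adding a document to an inverted index is a few dictionary inserts, while getting a new fact into an LLM's weights means another training or fine-tuning run.

```python
from collections import defaultdict

# Toy inverted index: word -> set of document IDs containing it.
index = defaultdict(set)

def add_document(doc_id, text):
    """Indexing a new document is a handful of dict inserts -- effectively instant."""
    for word in text.lower().split():
        index[word].add(doc_id)

def search(word):
    """Lookup is a single dict access."""
    return index.get(word.lower(), set())

add_document(1, "LLMs generate statistically likely sentences")
add_document(2, "search engines scan an index of the web")
print(search("index"))  # {2}

# By contrast, teaching an LLM the same new fact means another
# (pre)training or fine-tuning run over the data -- hours to weeks,
# not microseconds.
```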
The author fails to address any of these issues, which suggests to me that they don't know what they're talking about.
I suppose I could concede that an LLM can fulfill a role similar to the one a search engine traditionally has, but it'd kinda be like saying that a toaster is an oven. They're both confined boxes which heat food, but good luck if you try to bake 2 pies at once in a toaster.
God, that was a bad read. Not only is this person woefully misinformed, they're complaining about the state of discourse while directly contributing to the problem.
If you're going to write about tech, at least take some time to have a passable understanding of it, not just "I use the product for shits and giggles occasionally."
I have a hard time considering something with an immutable state to be sentient, but since there's no real definition of sentience, that's a personal decision.
Technical challenges aside, there's no explicit reason that LLMs can't do self-reinforcement of their own models.
I think animal brains are also "fairly" deterministic, but their behaviour also depends on the presence of various neurotransmitters, so there's a temporal/contextual element to it: situationally, our emotions can affect our thoughts, which LLMs don't really have either.
I guess it'd be possible to forward-feed an "emotional state" as part of the LLM's context to emulate that sort of animal brain behaviour.
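Something like this, as a rough sketch (the state fields and the update rule are made up for illustration, not any real API): the "emotions" live in the prompt, not in the weights.

```python
# Carry a mutable "emotional state" forward and prepend it to each prompt,
# so the model conditions on it the way it conditions on any other context.
emotional_state = {"frustration": 0.2, "curiosity": 0.8}

def build_prompt(user_message, state):
    # Serialize the state into the context window; the model's weights
    # never change, the extra text just steers the output.
    state_line = ", ".join(f"{k}={v:.1f}" for k, v in state.items())
    return f"[internal emotional state: {state_line}]\n{user_message}"

def update_state(state, event_valence):
    # Crude decay-plus-bump dynamics, loosely mimicking neurotransmitter levels.
    state["frustration"] = 0.9 * state["frustration"] + max(0.0, -event_valence)
    state["curiosity"] = 0.9 * state["curiosity"] + max(0.0, event_valence)
    return state

update_state(emotional_state, -0.5)  # e.g. a frustrating event nudges the state
print(build_prompt("Why did the build fail again?", emotional_state))
```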
Not them, but static in this context means it doesn't have the ability to update its own model on the fly. If you want a model to learn something new, it has to be retrained.
By contrast, an animal brain is dynamic because it reinforces neural pathways that get used more.
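A toy PyTorch sketch of that distinction (my own illustration, not any particular model): inference leaves the weights untouched, and learning is a separate, explicit training step.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for a trained LLM

# Inference: weights are frozen. Run this a million times and the
# model comes out byte-for-byte identical -- "static".
with torch.no_grad():
    out = model(torch.randn(1, 4))

# Learning anything new is a separate offline process: a training loop
# that explicitly updates the weights, analogous to retraining or
# fine-tuning an LLM.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss = nn.functional.mse_loss(model(torch.randn(1, 4)), torch.randn(1, 2))
loss.backward()
optimizer.step()
```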
Just a quick clarification:
/*/*/* is not a relative path. The first / references the root directory.
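An easy way to see the difference is with Python's glob (a throwaway example of mine, not from the original discussion): the leading / anchors the pattern at the filesystem root, and dropping it makes the pattern relative to the current directory.

```python
import glob

# Absolute pattern: the leading "/" anchors matching at the filesystem
# root, so this expands to entries exactly three levels deep from /.
print(glob.glob("/*/*/*")[:5])

# Relative pattern: no leading "/", so matching starts from the
# current working directory instead.
print(glob.glob("*/*/*")[:5])
```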