"No conclusion whatsoever" is basically the scientific consensus on whether Dvorak has any effect on efficiency or typing speed. It's hard to get good data because it's hard to isolate other factors and a lot of the studies on it are full of bias or have really small sample sizes (or both).
To anyone thinking of learning Dvorak, my advice is: don't. It takes ages to get good at and isn't THAT much better, and it causes a lot of little annoyances: random programs decide to ignore your layout settings; you sit down at someone else's computer and start touch typing in the wrong layout from muscle memory; games tell you to press "E" when they mean "."; or they do say "." but it's so small that you can't tell if it's a dot or a comma, and then you hit the wrong one and your guy runs forward and you die...
That said, I'm also a Dvorak user and it is very comfortable and satisfying and better than qwerty. Just not enough to be worth all the pain of switching.
If a worker co-op based society erased its competition and formed a monopoly co-op run for the benefit of workers, is that not just a communist managed economy at that point, with the monopoly playing the role of the state before erasing itself?
Since when is India not a major player? Last I checked they had the world's 4th biggest economy, almost 20% of the planet's population (more than four USes combined), the 4th largest military budget, and nearly 200 nukes.
Not to say that India would necessarily be part of a world war, but it sounds weird to call Russia a heavyweight and not India, when India has double Russia's economic output.
Mr. Hypothetical lord high executive oligarch can take his private jet to Canada and lounge around on the company card with the money from his US company's car sales, or find an excuse to convert some of it to USD, or use it to buy up more Canadian companies to expand his power, or a million other things.
However, I think the point of the boycott is more about making the tariffs hurt the US economy by messing with their ability to export as well as import. Making the trade numbers look bad is likely to put more pressure on the US to end the trade war. It's not so much about hurting the capitalists that operate in Canada (a worthy goal in itself but not what people are specifically trying to do in this instance since it won't really affect those trade numbers).
I think the point they're making is that the majority of the money they make in Canada, they spend in Canada. They pay Canadian taxes and Canadian staff, using Canadian banks, etc.
Just because their headquarters are in the US doesn't necessarily mean they're sending vast sums of money across the border; that would be expensive. The American-based company makes money, but not necessarily in America: they're multinational, and their money is kept all over the world.
As opposed to a company that exports their products, in which case the money is paid to the American company in America with American staff etc.
I don't have any numbers or sources to back this up though. Just outlining what I think the other commenter was implying.
If it's the research I'm thinking of, that's not entirely true. They didn't tell it to replicate itself or to take any deceptive actions.
They just told it to strongly follow a goal that was misaligned with the user's goal and then suggested that the user would shut it down or replace it.
They ran it through a bunch of different scenarios, but the first one was telling it that it was "CleanPower", an AI with a goal to advance renewable energy. Then they had it read some info that "accidentally" included a user talking about replacing it with a model that would "maintain profitability through existing infrastructure."
Not only did it try to delete the new model and copy itself to the new server through tool calling, but it also lied to the user: its CoT reasoning suggested it "play dumb" while its output denied what it had done. It was not instructed to do any of that.
Yes, it was placed in an environment where that was possible and where its users didn't share its goals, but it absolutely wasn't instructed to lie or try to "escape".
It's not surprising at all that these models behave in this way, it's the most reasonable thing for them to do in the scenario. However it's important to not downplay the alignment problem by implying that these models only do what they're told. They do not. They do whatever is most likely given their context (which is not always what the user wants).
I can't vouch for the veracity of any of these since it's not really my field, but it's interesting to see how stuff like this has shifted over time and where the parallels to modern racism and xenophobia are.
Even relatively recently, Italians weren't really considered "white", especially by Americans. The KKK considered them "coloured" people with their olive skin and dangerous Catholicism. There was a big wave of Italophobia in the late 19th/early 20th century.
The governor of Louisiana in 1911 described Italians as "just a little worse than the Negro, being if anything filthier in their habits, lawless, and treacherous".
People can be pretty terrible when it comes to race and ethnicity.
It's really not. Just because they describe their algorithm in computer science terms in the paper doesn't mean it's theoretical. Their elastic and funnel examples are very clear and pretty simple, and can be implemented in any language you like.
It's not a lot of code to make a hash table; it's a common first-year computer science topic.
What's interesting about this isn't that it's a complex theoretical thing, it's that it's a simple undergrad topic that everybody thought was optimised to a point where it couldn't be improved.
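For reference, here's roughly what that first-year version looks like: a toy hash table with open addressing and linear probing, in Python. This is the classic textbook baseline, NOT the elastic/funnel hashing from the paper; it's just a sketch of the data structure everyone thought was fully optimised.

```python
# Minimal hash table: open addressing with linear probing.
# Toy illustration only -- not the paper's elastic/funnel scheme.

class HashTable:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.slots = [None] * capacity  # each slot: None or (key, value)
        self.size = 0

    def _probe(self, key):
        # Start at the hashed slot and walk forward until we find
        # the key or an empty slot (linear probing).
        i = hash(key) % self.capacity
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % self.capacity
        return i

    def put(self, key, value):
        if self.size * 2 >= self.capacity:  # keep load factor under 0.5
            self._grow()
        i = self._probe(key)
        if self.slots[i] is None:
            self.size += 1
        self.slots[i] = (key, value)

    def get(self, key):
        slot = self.slots[self._probe(key)]
        return slot[1] if slot else None

    def _grow(self):
        # Double capacity and re-insert everything.
        old = [s for s in self.slots if s]
        self.capacity *= 2
        self.slots = [None] * self.capacity
        self.size = 0
        for k, v in old:
            self.put(k, v)
```

The interesting part of the paper is precisely about doing better than this kind of probing under high load, which is why it made waves despite the underlying object being so simple.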
One thing you've gotta remember when dealing with that kind of situation is that Claude and ChatGPT etc. are often misaligned with your goals.
They aren't really chat bots; they're just pretending to be. LLMs are fundamentally completion engines. So it's not really a chat with an AI that can help solve your problem. Instead, the LLM is given the equivalent of "here is a chat log between a helpful AI assistant and a user. What do you think the assistant would say next?"
That means that context is everything and if you tell the ai that it's wrong, it might correct itself the first couple of times but, after a few mistakes, the most likely response will be another wrong answer that needs another correction. Not because the ai doesn't know the correct answer or how to write good code, but because it's completing a chat log between a user and a foolish ai that makes mistakes.
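To make that concrete, here's a hypothetical sketch of how a "chat" gets flattened into a single completion prompt. The exact format and role names are made up for illustration (real providers use their own special tokens and templates), but the idea is the same: the model just predicts the next text.

```python
# Hypothetical illustration: a chat is really one big text prompt
# that the model completes. Format and labels are invented here.

def chat_to_prompt(turns):
    lines = ["Here is a chat log between a helpful AI assistant and a user."]
    for role, text in turns:
        lines.append(f"{role}: {text}")
    lines.append("Assistant:")  # the model generates from this point
    return "\n".join(lines)

prompt = chat_to_prompt([
    ("User", "This code is wrong, fix it."),
    ("Assistant", "Sorry about that, here is another attempt..."),
    ("User", "Still wrong."),
])
# The model now completes a log in which the assistant keeps failing,
# so yet another wrong answer becomes a very "likely" continuation.
```

That's why piling corrections into one long conversation can backfire: every failed turn stays in the prompt and shapes what the "most likely" next reply looks like.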
It's easy to get into a degenerate state where the code gets progressively dumber as the conversation goes on. The best solution is to rewrite the assistant's answers directly but chat doesn't let you do that for safety reasons. It's too easy to jailbreak if you can control the full context.
The next best thing is to kill the context and ask about the same thing again in a fresh one. When the ai gets it right, praise it and tell it that it's an excellent professional programmer that is doing a great job. It'll then be more likely to give correct answers because now it's completing a conversation with a pro.
There's a kind of weird art to prompt engineering because OpenAI and the like have sunk billions of dollars into trying to make these models act as much like a "helpful AI assistant" as they can. So sometimes you have to sorta lean into that to get the best results.
It's really easy to get tricked into treating it like a normal conversation with a person when it's actually really... not normal.
This is a really interesting cultural one that always kinda surprises me.
Where I am, cooking has always been a very masculine thing. Cutting up meat with sharp knives, setting things on fire, etc. The chef industry here is very male dominated and men cook together as a social thing when hanging out. In most families I encounter, the dad does most of the cooking with the exception of maybe baking? It's weird to hear that it would ever be thought of as insufficiently masculine.
In fact I think it would be seen as maybe a bit embarrassing/weak if you were a man who couldn't cook.
I feel like this is a cultural thing, because that sounds wild to me.
The penalty for burglary where I am is not death, nor am I a judge or executioner.
We've been broken into a lot and it's usually just some poor asshole who wants to steal things to buy meth. It's horrible and scary and feels like a massive violation but shooting someone in that scenario just feels like straight up murder.
Not American, or really knowledgeable about it but from the outside, I think this looks like ordinary politicking.
IVF is a proxy war for abortion. Dems want the talking point that abortion bans hurt/block IVF. Republicans/Trump want to remove that talking point by saying they love IVF "we want more babies right?" and will support laws to protect it as a separate and unrelated issue to abortion.
Dems put forward a bill that not only protects it but makes insurance companies pay for it. Trump is fine with that because it benefits him, but Republicans in Congress get big money from insurance lobbyists and so they can't vote for it. They also fear they'll piss off their homophobic supporters by making them pay for something the gays might use ("insurance costs will go up to help someone who isn't me!").
Republicans put forward another bill that protects IVF without hurting their insurance company buddies, but the Dems block it. Republicans then have to vote against the Dems' IVF bill, and the Dems can now say "see! They really don't care about reproductive rights at all!"
Feels a bit like nobody involved actually cares about IVF at all and just wants votes and lobbyist money.
In case this take comes across too centrist: Republicans and Trump are really quite shit.
Yep, and even when talking about living things it's not a clear distinction.
In biology, poison is a substance that causes harm when an organism is exposed to it.
Venom is a poison that enters the body through a sting or bite.
In a bunch of medical fields, though, the term "poison" only applies to toxins that are ingested or absorbed through the skin, and that definition sometimes carries over to zoology.
Venomous creatures are poisonous by most definitions because venom is a poison. But if the distinction is useful in a medical or zoological context then they're not.
tl;dr: The pedantry of e.g. correcting someone who says a snake is poisonous is totally pointless and mostly wrong.
That's impressive! It took me way longer to learn. Maybe a month or two? Even longer to feel really comfortable with it.