
queermunist she/her @queermunist@lemmy.ml
Posts: 8 · Comments: 7,208 · Joined: 2 yr. ago

  • Belarus has basically been driven out of the European economy, but that's hardly something Russia did. If Russia has become an empire and Belarus has lost its sovereignty it's because Europe decided to force the issue. Bad move, I guess?

    Or maybe Belarus and Russia are just allies and they're working together like normal allies do.

    Either way, comparing this to Iran is absurd! Iran has ten times the population. Plus, China is right there. It wouldn't work.

  • Do you think Biden wouldn't have done the exact same thing to "defend" Israel?

    This is clearly about Israel and the long-time desire for war with Iran in the US government. Blaming Russia is, frankly, absurd.

    Trump certainly would like to have Putin as a business buddy, he's an 80's deals guy after all, but there's no business here that benefits Russia so that's also absurd. Russia does not benefit from Iran's collapse. This is very bad for Russia.

  • Only empires can have dependency, Russia isn't nearly that strong. Russia needs allies or it will be overwhelmed.

  • LLMs create a useful representation of the world that is similar to our own when we feed them our human-created, human-curated, human-annotated data. This doesn't tell us much about the nature of large language models, nor about the nature of object concept representations; what it tells us is that human inputs result in human-like outputs.

    Claims about "nature" are much broader than the findings warrant. We'd need to see LLMs fed entirely non-human datasets (no human creation, no human curation, no human annotation) before we could make claims about what emerges naturally.

  • If Iran gets their own nukes, one of Russia's most important allies in the region is safe from attack. That seems a lot more useful than a "bargaining chip". Now, Russia risks losing an important ally and whatever bargaining chip they may have had.

  • It told you why not! Because Iran is an important regional ally and business partner and military asset.

    Russia sacrificing an ally like this for something as paltry as "gas prices" would be short-sighted in the extreme.

    Although, since you seem to think Russia is "nearly bankrupt" and that this only buys them "an extra year or two", that explains why you don't think this matters.

  • What's so fantastical about the US using a tactical nuke and then lying about it?

    Looks like that didn't happen here, but I see no reason that it couldn't.

  • I’m not disputing this, but I also don’t see why that’s important.

    What's important is the use of "natural" here, because it implies something fundamental about language and material reality, rather than this just being a reflection of the human data fed into the model. You did it yourself when you said:

    If you evolved a neural network on raw data from the environment, it would eventually start creating similar types of representations as well because it’s an efficient way to model the world.

    And we just don't know this, and this paper doesn't demonstrate this because (as I've said) we aren't feeding the LLMs raw data from the environment. We're feeding them inputs from humans and then they're displaying human-like outputs.

    Did you actually read through the paper?

    From the paper:

    to what extent can complex, task-general psychological representations emerge without explicit task-specific training, and how do these compare to human cognitive processes across a broad range of tasks and domains?

    But their training set is still data picked by humans, given textual descriptions made by humans, and then tested with a representation learning method previously designed for human participants. That's not "natural", that's human.

    A more accurate conclusion would be: human-like object concept representations emerge when fed data collected by humans, curated by humans, annotated by humans, and then tested by representation learning methods designed for humans.

    human in ➡️ human out

  • Iran is a major trade partner and an important partner in the Ukraine War and they were both important allies of Assad and worked closely together in Syria. They have deep ties. There's no way the price of oil is worth losing one of their key allies in the region.

  • Netanyahu?

    Although it's not like Trump is some puppet on a string. The whole US government wants war with Iran and it has for my entire life. There doesn't need to be some secret master behind this (Israel, Russia, whatever).

  • Israelis only attack from the sky or drone control bases, so they need to make up for the lack of infantry with even more bombs.

  • Corporate needs you to find the differences between this picture and this picture.

  • ...do you think Putin wants Iran to be bombed?

  • There are different isotopes in the fallout, different radio spectrometry or whatever, but I think they could lie and say that the reason it looks like a conventional nuclear weapon is because Iran was hiding a nuke and they blew it up.

  • It's the perfect crime. They bombed nuclear sites, which means there are going to be radiation spikes regardless of what kind of bombs they dropped, which means they could drop small nukes and probably get away with it.

  • I didn’t say they’re encoding raw data from nature

    Ultimately the data both human brains and artificial neural networks are trained on comes from the material reality we inhabit.

    Anyway, the data they're getting doesn't just come in a human format. The data we record is only recorded because we find it meaningful as humans, and most of the data is generated entirely by humans besides. You can't separate these things; they're human-like because they're human-based.

    It's not merely natural. It's human.

    If you evolved a neural network on raw data from the environment, it would eventually start creating similar types of representations as well because it’s an efficient way to model the world.

    We don't know that.

    We know that LLMs, when fed human-like inputs, produce human-like outputs. That's it. That tells us more about LLMs and humans than it tells us about nature itself.

  • LLMs are not getting raw data from nature. They're being fed data produced by us and compiled into their training sets: human writings and human observations and human categorizations and human judgements about what data is valuable. All the data about our reality that we feed them is from a human perspective.

    This is a feature, and will make them more useful to us, but I'm just arguing that raw natural data won't naturally produce human-like outputs. Instead, human inputs produce human-like outputs.

  • But it's emerging from networks of data from humans, which means our object concept representation is in the data. This isn't random data, after all, it comes from us. Seems like the LLMs are just regurgitating what we're feeding them.

    What this shows, I think, is how deeply we are influencing the data we feed to LLMs. They're human-based models and so they produce human-like outputs.

  • Isn't this just because LLMs use the object concept representation data from actual humans?