
  • fuck him, but also this is blatant PR strategy.

    "Oh we've gotten rid of the man at the top, see we're better now! No no, nothing institutional has or will change"

  • To be even-handed you could point to science and industry doing exactly the same shit when given a freer hand. Whether it's basically all industrial "accidents" in various and sundry capitalist economies (scare quotes because overriding safety protocols or skipping due diligence in hiring to save money isn't an accident, it's social murder), stuff like that super secret American bombsight being a scam, industrial agriculture everywhere fucking the soil for short-term profits, oil spills, the whole climate crisis and the burying of it for money, etc.

    It's sort of difficult to get into the blame game because everything is multifaceted, and regardless of whatever theoretical construct presides over everything, humans have the same incentives for corruption at lots of levels.

    You also need to keep in mind the USSR was huge and completely and utterly fucked up by WW2, after inheriting a Russia completely and utterly fucked by centuries of tyranny and WW1. In a huge empire, rapidly industrialising, that just had millions of people killed and tens of thousands of towns destroyed, there are going to be problems. No system is perfect enough to just overrule the material and social damage done during that.

    TBH my own take is they did pretty well but centralised management everywhere has a tendency to fuck everything up. Oh to be clear, anything with a single leader is highly centralised. Like tech startups are centralised, most research labs are under the iron grip of a PI and thus centralised (and talk to anyone below tenure on the academic track to learn of the problems there).

  • idk if our gov can claim any moral high ground with the absolutely ludicrous claim they assert for "research".

    It seems a bit paranoid in general, who am I to judge though. Still it's deeply funny to me that our greatest fear is someone doing to us what the British empire already did, while also denying how evil that empire was.

  • Looks like they might have some molts left so maybe they'll heal. I'm in the Blue Mountains, idk why there aren't many spiders this year. Normally they're everywhere. Not too many wasps actually, but quite a few of the spider-hunting ones, so maybe that's the reason for the lack. I've seen those wasps take on trapdoors and win!

    I've been deboomering this lot of land and it's been rewarding seeing life return as all the poisoned crap and ecologically unsuitable plants are replaced.

  • It's the moisture I reckon, got a lot of crickets in the grass atm and I think it's because they don't have to hide during the day to avoid dehydration.

    Fewer spiders though; last year was trapdoor and wolf open season. This year I'm only seeing black house spiders and my loyal banded huntsman colony.

  • Buddy if the Christian conception of an afterlife is real I'm either gonna be chillin in the shade of a tree after a hard day's work or whatever, or standing hand in hand around some multi sword mouthed geometric acid trip of a god saying "holy" on repeat in a trip that would put eating all the LSD in history to shame.

    Either way who cares.

  • It's not brave or noble to police expressions of anger at oppressors, it's just foolish. It won't make society better; civility is just a tool the powerful use to enforce the status quo.

    He chooses to let someone starve rather than feed them, and this counts as fine, civil behaviour. I call him a fucking cunt for doing that and wish death upon him, and this is uncivil behaviour and I lack compassion? Nonsense, it is compassion that fills me with rage.

  • I thought they were saying they didn't mean that LLMs will aid science, not that LLMs weren't the topic. It's ambiguous on reread.

    AI isn't well defined, which is what I was highlighting with the mentions of computer vision etc; that falls under AI and it isn't really meaningfully different from other diagnostic tools. If people mean AGI then they should say that, but it hasn't even been established that it's likely possible, let alone that we're close.

    There are already many other intelligences on the planet and not many are very useful outside of niches. Even if we make a general intelligence it's entirely possible we won't be able to surpass fish level, let alone human level, for example. And even then it's not clear that intelligence is the primary barrier in anything, which was what I was trying to point out in my post about what's holding science back.

    There are so many ifs that AGI is a "Venus is cloudy -> dinosaurs" discussion: you can project anything you like onto it, but it's all just fantasy.

  • He literally chooses every single day to actively ignore the plight of the people he claims to rule. He does nothing.

    Every single day he actively chooses to ignore suffering, probably even cracks jokes about it, while actively working to preserve his enormous privilege and protect his kin from facing consequences for their heinous actions. He stands at the head of an institution of violence, racism, cruelty, and exploitation and every single moment of his life he chooses to side with that over any earnest attempt to redeem their reputation.

    The entire justification for their privilege is so insane it makes phrenology look respectable.

    He is horrible, he could help so many people with a few words and the equivalent of pocket change, but he chooses not to for fear of starting a process that ends with him living as one of the ordinary citizens he claims to protect. It's fucking bananas that you think there is some moral reason to extend civility to such a monstrous person.

  • This seems like splitting hairs; AGI doesn't exist, so that can't be what they mean. AI applies to everything from pathing algorithms for library robots to computer vision, and none of those seem to apply.

    The context of this post is LLMs and their applications.

  • Do you think they have any compassion for random proles like us? Like, they could divest themselves of wealth to help people and just live luxurious lives instead of gold-flakes-in-their-food lives.

    I think any human being with enough compassion to deserve treatment as anything more than a hostile enemy would do that.

  • If I ever have a golden piano while someone on the planet goes cold and hungry at night you can draw and quarter me.

  • They, uh, still do the same thing fundamentally.

    Altman isn't gonna let you blow him dude

  • Fair enough, I used to be a scientist (a very bad one that never amounted to anything) and my perspective has been that the major barriers to progress are:

    • We've already picked all the low hanging fruit
    • Science education isn't available to many people, so perspectives are quite limited
    • Power structures are exploitative and ossified, driving away many people
    • Industry has too much influence; there isn't much appetite to fund blue sky projects without obvious short term money earning applications
    • Patents slow progress
    • Publish or perish incentivises excessive volumes of publication, fraud, and splitting discoveries into multiple papers, which increases the burden on researchers to stay current
    • Nobody wants to pay scientists, so bright people end up elsewhere

  • Is it? This seems like a big citation needed moment.

    Have LLMs been used to make big strides? I know some trials are going on aiding doctors in diagnosis and stuff, but computer vision algorithms have been doing that for ages (shit, contrast dyes, PCR, and blood analysis also do that really), and they come with their own risks, and we haven't seen, like, widespread unknown illnesses being discovered or anything. Is the tech actually doing anything useful atm or is it all still hype?

    We've had algorithms help find new drugs and stuff, or plot out synthetic routes for novel compounds; we can run DFT simulations to help determine if we should try to make a material. These things have been helpful but not revolutionary, and I'm not sure why LLMs would be. I actually worry they'll hamper scientific progress by aiding fraud (unreproducible results are already a fucking massive problem), or by lying extremely convincingly or omitting something if you try to use one to help with a literature review.

    Why do you think LLMs will revolutionise science?

  • I think it's really important to keep in mind the separation between doing a task and producing something which looks like the output of a task when talking about these things. The reason is that their output is tremendously convincing regardless of its accuracy, and given that writing text is something we only see human minds do, it's easy to ascribe to the model an intent behind its output that we have no reason to believe is there.

    Amazingly it turns out that merely producing something which looks like the output of a task often accidentally accomplishes the task along the way. I have no idea why merely predicting the next plausible word can mean that the model emits something similar to what I would write down if I tried to summarise an article! That's fascinating! But because it isn't actually setting out to do that, there's no guarantee it did, and if I don't check, the output will be indistinguishable from a real summary to me, because looking convincing is what the models are built to do above all else.

    So I think that's why we need to keep them in closed loops of person -> model -> person, and explaining why, or intuiting whether a particular application is potentially dangerous, is hard if we don't maintain a clear separation between the different processes driving human vs LLM text output.

  • Sure, but it's not like networks get anything from piracy, so they have to content themselves with some revenue rather than infinity. Especially for old content, it's just not worth much individually. There's also a looooot of massively overpaid and wasteful people involved in the major networks.

    I know it's not just Netflix but, you know, poetic licence or something. Also I don't really give a shit about being fair to multibillion dollar corporations that do basically nothing pro-social :p