Posts 22 · Comments 2,107 · Joined 1 yr. ago

  • First, I am not a Russia fan or apologist.

    …But the Soviets made some good shit, often with a “big and simple” philosophy, and often well engineered, too. Soyuz has been so reliable it’s unreal; it carried astronauts from around the world to space for decades because nothing else was dependable enough.

    They did tons of real, old-school nuclear testing, not simulations like newer powers. They knew what they were doing.

    Hence, asserting that most of Russia’s warheads are duds is quite an assumption. It’s possible. But there’s enough of a track record for the threat to be very real.

  • People have been saying this for a decade, including HW outlets that have long since gone under.

    …And yet these cards get snapped up like catnip. So are the Intel B580 and the AMD 9000 series, apparently, as NOTHING IS IN STOCK.

    The GPU market is so screwed up. We’ll be fed what we like at any price, apparently.

    That being said, in hindsight, Intel must be kicking itself for choosing TSMC for Arc. Even if the cards drew 40% more power on Intel’s own fabs (and I don’t think they would), at least Intel would control its own production.

  • I love how this exact speech is interpreted as pro-Trump, depending on where you read it:

    https://redstate.com/bobhoge/2025/05/30/jp-morgan-ceo-jamie-dimon-brings-fire-to-econ-conferenceits-the-enemy-within-that-could-doom-us-n2189845

    And the “other side” is digging up all these instances where he’s super liberal (mostly true), while Lemmy here is finding his conservative bootlicking (also mostly true).

    This is peak 2020s. Like, when I’m old and the world is a smoking crater, I will think back to now, when the internet (mostly engagement-driven Big Tech) dragged basically the whole planet into the 7th level of information hell.

  • I have lost track of them, lol. Isn’t that just SE underneath…

    I think I inherited AE too, somehow. I dunno; honestly, I haven’t touched any BGS game in a while because the other RPGs I’ve been playing (2077, KCD2, even the GOTG game) make them feel dated.

    Like, with KCD2, I keep thinking that if I had witnessed it as a kid in love with Oblivion, it would have blown my mind, while Skyrim would have felt similar and Starfield… kinda dull?

  • You can dual boot.

    I find that ideal. I strip Windows down to the bone, turn off Defender’s real-time protection and some other security measures (which I don’t worry about, since Windows can’t read my Linux partitions anyway), and it makes games fly. (A sketch of that Defender toggle is below.)

    And I use Linux for basically everything else.

    No messing around with stuff that doesn’t want to work on either OS. And Linux is so much easier/faster for so many things (anything GPU compute, dealing with media, or Java games like Minecraft/Starsector).
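
    For what it’s worth, a minimal sketch of that Defender toggle. Set-MpPreference -DisableRealtimeMonitoring is the real PowerShell cmdlet; the Python wrapper around it is my own illustration, and it only works from an elevated session with Tamper Protection already off:

    ```python
    # Sketch: flip Defender real-time monitoring on/off from Windows.
    # Set-MpPreference is the real Defender cmdlet; this wrapper is
    # illustrative, must run elevated, and is blocked if Tamper
    # Protection is still enabled.
    import subprocess
    import sys

    def set_defender_realtime(enabled: bool) -> None:
        # The cmdlet's parameter is Disable*, so invert the flag.
        flag = "$false" if enabled else "$true"
        result = subprocess.run(
            ["powershell", "-NoProfile", "-Command",
             f"Set-MpPreference -DisableRealtimeMonitoring {flag}"],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            sys.exit(f"Failed (not elevated?): {result.stderr.strip()}")

    if __name__ == "__main__":
        set_defender_realtime(enabled=False)  # the "games fly" setting
    ```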

  • Start with Strange New Worlds. If you like The Expanse (and I love The Expanse), you’ll like it. It’s gorgeous, it’s brand new, it feels modern, but it’s as Star Trek as the original series.

    Then go to Deep Space 9 once you’re used to the water.

    Please don’t ban me, Lemmy admins, but… you’ll have people tell you to start with TNG. Don’t. It does feel old, heh.

  • I dunno. From my more isolated perspective on GitHub and in small LLM-testing circles, I see a lot of 3090s and 4090s, sometimes arrays of 3060s/3090s or old P40s or MI50s, which people got basically for experimentation and development because they can't drop (or at least justify) $5K.

    They would 100% drop that money on at least one 48GB 7900 instead (as the sheer capacity is worth it over the speed hit and finickiness), and then do a whole bunch of bugfixing/testing on them. I know I would. Hence the Framework Strix Halo thing is sold out even though it's... rather compute-lite compared to a 3090 or better.

    It seems like a tiny market, but a lot of the frameworks/features/models being developed by humble open-source devs filter up to the enterprise space. You'd absolutely see more enterprise use once the toolkits were hammered out on desktops... but they aren't, because AMD gives us no incentive to do the work. A 7900 is just not worth the trouble over a 3090/4090 if its VRAM capacity is the same, and this (more or less) extends up and down the price range.

  • > WRT pricing, I’m pretty sure AMD is typically a fraction of the price of Nvidia hardware on the enterprise side

    I'm not as sure about this, but it seems like AMD is taking a fat margin on the MI300X (and its successor?) and kinda ignoring the performance penalty. It's easy to say "build it yourself!" but the reality is that very few can, or will, do this; most will simply try to deploy vLLM or vanilla TRL or something as best they can (and run into the same issues everyone does).

    The 'enthusiast' side, where all the university students and tinkerer devs reside, is totally screwed up, though. AMD is mirroring Nvidia's VRAM-cartel pricing when they have absolutely no reason to. It's completely bonkers. AMD would be in a totally different place right now if they had sold 40GB/48GB 7900s for an extra $200 (instead of price-matching an A6000).

    > The biggest culprit from what I can gather is that AMD’s GPU firmware/software side is basically still ATI camped up in Markham, divorced from the rest of the company in Austin that is doing great work with their CPU-side.

    Yeah, it does seem divorced from the CPU division. But a lot of the badness comes from business decisions, even when the silicon is quite good, and some of that must be from Austin.

  • > AMD has basically gone the “build it and they will come” attitude

    Except they didn't.

    They repeatedly fumble the software with little mistakes (looking at you, Flash Attention). They price the MI300X, the W7900, and any high-VRAM GPU through the roof, when they have every reason to be more competitive and undercut Nvidia. They have sad, incomplete software efforts divorced from what devs are actually doing, like their quantization framework or some inexplicably bad LLMs they trained themselves. I think Strix Halo is the only GPU-compute thing they did half right recently, and they still screwed that up.

    They give no one any reason to give them a chance, and then wonder why no one comes. Lisa Su could fix this with literally like three phone calls (remove VRAM restrictions on their OEMs, cut pro-card prices, fix stupid small bugs in ROCm), but she doesn't. It's inexplicable.

  • Depends on the quantization.

    A 7B is small enough to run in FP8 or a Marlin quant with SGLang/vLLM/TensorRT, so you can probably get very close to the H20 on a 3090 or 4090 (or even a 3060) if you know a little Docker. (A sketch of the vLLM route is below.)
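
    A minimal sketch of the vLLM route, using its offline Python API. The model name and settings here are illustrative, not from the comment above; quantization="fp8" is a real vLLM option, but FP8 support varies by GPU generation, so on an Ampere card (3090/3060) a pre-quantized GPTQ checkpoint run through vLLM's Marlin kernels is the safer bet:

    ```python
    # Sketch: run a 7B model with vLLM's offline Python API.
    # The model name is illustrative. quantization="fp8" is happiest on
    # Ada/Hopper; on Ampere (3090/3060), point `model` at a pre-quantized
    # GPTQ checkpoint instead and drop the quantization argument.
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="Qwen/Qwen2.5-7B-Instruct",  # any 7B checkpoint
        quantization="fp8",                # omit for pre-quantized models
        gpu_memory_utilization=0.90,       # leave a little VRAM headroom
        max_model_len=8192,
    )

    params = SamplingParams(temperature=0.7, max_tokens=128)
    outputs = llm.generate(["Explain FP8 quantization briefly."], params)
    print(outputs[0].outputs[0].text)
    ```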

  • No offense, but it feels a little late in the game’s life cycle to hit “critical mass” for modding. I mean, I guess it has a long sales tail, and other adaptations will drive people to the game.

    Still, this is good! Better now than never.