
  • Yeah. But what I’m getting at is that the economics may hint at what Valve is planning.

    Maybe AMD isn’t making a “specialized” monolithic die like Van Gogh? Perhaps Valve is simply customizing blocks of AMD’s existing product (die) stack, which is more financially plausible.

    AFAIK one of the current issues with Strix Halo for a handheld would be high idle power, but maybe the next generation is better in that respect.

  • ...once the tech has advanced enough that putting one together makes for a substantial boost in what you get for the same price and power envelope.

    It already does.

    It’s more a question of economies of scale. Taping out a single custom chip is extremely expensive, like hundreds of millions of dollars before a single chip is sold.

    AMD could make a custom Strix Halo SKU for Valve (think a 6-core X3D CCD, a 32-40CU GPU clocked low for efficiency) for much less. Perhaps something like that (a custom multi-die configuration of Strix Halo's successor?) is what Valve opted for.
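
    Rough back-of-envelope on why reuse wins (the figures below are purely illustrative assumptions, not anything Valve or AMD has said):

        # Amortizing a one-time custom tape-out over a handheld's lifetime sales.
        # Both figures are illustrative assumptions, not real Valve/AMD numbers.
        tapeout_cost_usd = 300e6  # "hundreds of millions" spent before a single chip ships
        lifetime_units = 4e6      # assumed Deck-class sales volume
        print(f"NRE per unit: ${tapeout_cost_usd / lifetime_units:.0f}")  # ~$75 of tape-out cost baked into every console

    Customizing blocks of an existing die stack instead spreads that fixed cost across AMD’s whole product line.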

  • Valve is getting first dibs at the next AMD SoC, as far as I’ve heard

    This is huge if true, as Van Gogh (the Deck chip) was a separate "line" from all the overly CPU-heavy laptop chips other handhelds are using at the moment.

  • The problem predates the AI craze. SEO slop started snowballing well before it.

    Highly recommend reading this: https://www.wheresyoured.at/the-men-who-killed-google/

    Non-Google search engines are kinda the in-progress solution, as the sheer volume of stuff to block is likely intractable. Google has the power to fight it themselves, but, well, see the writeup...

  • Oh yeah, you will run into a ton of pain sampling random projects on AMD/Intel. Most "experiments" only work out of the box on Nvidia. Some can be fixed, some can't.

    A used 3090 is like gold if you can find one, yeah.

    And yes, I sympathize with Nvidia being a pain on Linux... though it's not so bad if you just output from your IGP or another card.

    And yes, stuff rented from vast.ai or whatever is cheap. So are APIs. TBH that's probably the way to go if budget is a big concern and a 24GB B60 isn't in the cards.

  • The better choice depends on which software stack you intend to wrangle with, how long you intend to keep the cards, and your usage patterns, but the B580 is the newer silicon.

    Exllamav3 is the shit these days (you can fully offload a 32B in 16GB with very little loss; napkin math below), and it's theoretically getting AMD support way before Intel (unless Intel changes that).

    ...Also, 2x 3060s may be a better option, depending on price.
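
    Napkin math behind that 32B-in-16GB claim (the bits-per-weight figure is an assumed low-bit quant average, not a number from exllamav3 itself):

        # Rough VRAM estimate for a fully offloaded 32B-parameter model.
        # bits_per_weight is an assumed low-bit quant average, not a measured figure.
        params = 32e9
        bits_per_weight = 3.5
        weights_gb = params * bits_per_weight / 8 / 1e9
        print(f"~{weights_gb:.0f} GB of weights")  # ~14 GB, leaving a bit of headroom for KV cache on a 16 GB card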

  • Prices for AMD/Nvidia (except maybe a used AMD 7900 XTX) are so awful that this is still a good deal, no matter how much bandwidth it has. For pure text LLM usage, capacity is king.

    Intel's hands are tied by what silicon they have available, unfortunately.

  • Maser drills: https://newatlas.com/energy/geothermal-energy-drilling-deepest-hole-quaise/

    In a nutshell, it’s an economically brilliant idea: take hand-me-down microwave(ish) spectrum masers from fusion research, drill holes deep into the crust (leaning on the fossil fuel industry’s drilling expertise), then hook the resulting steam up to existing coal plants, so you don’t have to build anything else. The coal plant gets free geothermal heat, the drillers move on to the next site: everyone wins.

    It’s taking a worryingly long time though. I hope it gets enough funding.

  • Certain subreddits used to be like this.

    But all my favorites have taken one of two paths:

    • Get algorithmically deprioritized (due to a “bug”, as an admin told a mod) and hemorrhage users. What drains in particular is the collective ‘intelligence’ of the sub; the interesting intellectual discussions are gone. One such example: /r/localllama
    • The sub gets huge. Bots repost memes and bait as attention farms. It doesn’t feel like a small town anymore. Deeper discussions drain away in favor of shallow repetition of the same points, over and over. One example of this for me is /r/thelastairbender.
  • The base M4 is a very small chip with a modest memory config. Don’t get me wrong, it’s fantastic, but it’s more Steam Deck/laptop than beefy APU (which the M4 Pro is a closer analogue to).

    $1200 is pricey for what it is, partially because Apple spends so much on keeping it power efficient, rather than (for example) using a smaller die or older process and clocking it higher.