
  • ...

    In his notes, Roszak wrote that Google's search advertising "is one of the world's greatest business models ever created" with economics that only certain "illicit businesses" selling "cigarettes or drugs" "could rival."

    ....

    Beyond likening Google's search advertising business to illicit drug markets, Roszak's notes also said that because users got hooked on Google's search engine, Google was able to "mostly ignore the demand side" of "fundamental laws of economics" and "only focus on the supply side of advertisers, ad formats, and sales." This was likely the bit that actually interested the DOJ.

  • Elm (for frontend). https://elm-lang.org/

    Nothing else is as easy to refactor, maintain, add new features to, or come back to after a gap, and nothing else is as crash-free and rock solid.

    No other compiler is as fast, friendly, helpful and insightful. Seriously. You don't wait for the compiler. It's instant even on huge code bases. And the resulting output outperforms other major frameworks.

    Its syntax is weird at first (even stranger than Python) and the autoformatter is mad keen on blank lines, but after a while it's just so clear and easy to follow.

    You have to let go of your object-oriented mindset and stop trying to turn everything into objects and components, but everything I hated about maintaining old code evaporated once I did. I used to believe that objects detangled code; I don't know why I kept believing that despite the evidence, because apart from pretty small and simple things, OO code gets extremely tangled. Elm is absurdly easy to refactor, so you just do.

    It's genuinely nice to add new features to old code, something I've never experienced before in a few decades of programming.

    The Elm Slack is also a very helpful place indeed, and you usually get a lot of support pretty quickly.

    Adding the link to their front page, I see they call it "A delightful language for reliable web applications" and the first claim is "no runtime exceptions". I remember thinking that was marketing BS but being intrigued by the bold claim. A few years later and I can honestly say that that accurately describes my experience.

    These last few years I've rediscovered the joy of coding.

  • One of the first things they did when the coalition came to power was cancel the Building Schools for the Future programme, and the construction industry took a massive hit, resulting in an awful lot of pointless unemployment. This article claims that that stuff was exactly what Gordon Brown had planned. I'm about as skeptical about that claim as I am about anything that came from the holes of former Prat Clown Boris Johnson or former Presidunce Donald Trump.

  • You are clearly a Very Intelligent Expert and a Wise and Knowledgeable person, so I must bow to your greater, deeper and fuller understanding.

    I was weak, pathetic and stupid for thinking that the safety concerns this raises were more important than the technicalities of this individual case. Please accept my humble apologies. I'm sure you'll have further corrections for my naive fumblings and I await your Academic Input eagerly.

  • I'm not sure that's an important question. In my view, even if it turned out correct, "This particular victim would have died anyway, so delaying emergency vehicles is fine" is a logical fallacy, an ethical error and a failure of empathy.

  • Well, sounds like you're well on your way to hand-rolling your own product comparison tool that's Powered By AI™. You could make a popular price comparison site that initially filters out all that cruft and just gives you simple, clear, easy-to-read information about products.

    Version 2 could have handy links to the cheapest websites.

    Once it gets super popular you could offer retailers the chance to ensure their products and prices are correct. Perhaps a nice easy AI-powered upload where you dump the info in whatever format you like, check it's understood and go live.

    You could later offer retailers the chance to host a storefront with you, or maybe initially allow just one or two very tasteful, clearly marked-as-advertisement links for strictly AI-sanctioned relevant upselling: you know, offering the warranty with the product, or the printer with the fancier ink alongside the ones that exactly matched the criteria.

    Once your engagement with retailers is strong, and they know they'll be missing out on a lot of custom, you can start maximising your income from them.

    Or, wait, did this whole cycle already repeat itself many times over with many websites and many corporations?

    Enshittification is real, and it's already AI-powered. We don't know exactly why the thing in front of us when we're online is whatever is most likely to keep us scrolling, clicking and purchasing (and their profits maximised), but it's reasonable to assume that on a lot of successful websites, some sort of AI system chose it for exactly those purposes.

    It's nice that you feel AI will get us away from the power of the multinational corporations, but I think it's vastly more likely that the AI we use will fall under their control and they will be twenty steps ahead of us. They were the ones who popularised it in the first place!

    (Personally, I tend to use some reviewing sites that I trust and, in particular for phones, a spec aggregator so I can filter out the five-year-old products that Amazon is offering me.)

  • You know that an LLM is a statistical word prediction thing, no? That LLMs "hallucinate". That this is an inevitable consequence of how they work. They're designed to take in a context and then sound human, or sound formal, or sound like an excellent programmer, or sound like a lawyer, but there's no particular reason why the content that they present to you would be accurate. It's just that their training data contains an awful lot of accurate data, which has a surprisingly large amount of commonality of meaning.

    You say that the current crop of LLMs are good at Wikipedia-style questions, but that's because their authors have trained them with some of the most reliable and easy to verify information on the Web. A lot of that is Wikipedia-style stuff. That's its core knowledge, what it grew up reading, the yardstick by which it was judged. And yet it still goes off on inaccurate tangents, because there's nothing inherently accurate about statistically predicting the next word based on your training and the context and content of the prompt.

    Yes, LLMs sound like they understand your prompt and are very knowledgeable, but the output is fundamentally not a fact-based thing; it's a synthesized thing, engineered to sound like its training data.
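
    To make that concrete, here's a deliberately tiny sketch in plain Python of "statistically predicting the next word". It is nothing like a real transformer-based LLM (it's just a bigram table, and the training_text and generate names are invented for this illustration), but it shows the core point: the continuation is chosen by what tends to follow in the training data, not by what is true.

    ```python
    import random
    from collections import defaultdict, Counter

    # Made-up "training data": mostly accurate sentences, plus one wrong one.
    training_text = (
        "the capital of france is paris . "
        "the capital of france is paris . "
        "the capital of australia is canberra . "
        "the capital of australia is sydney . "
    )

    # Count which word tends to follow which (a bigram table).
    follows = defaultdict(Counter)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

    def generate(prompt_word, length=5):
        """Repeatedly 'predict the next word' by sampling from the counts."""
        out = [prompt_word]
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break
            choices, weights = zip(*candidates.items())
            out.append(random.choices(choices, weights=weights)[0])
        return " ".join(out)

    print(generate("the"))
    # The model only knows that "is" is often followed by "paris", "canberra"
    # or "sydney"; it has no idea which city belongs to which country, so
    # "the capital of france is sydney" is a perfectly plausible output.
    ```

    Real models condition on far longer contexts with far better statistics, so they're right much more often, but the mechanism is still "produce a plausible-sounding continuation", which is why confident-sounding inaccuracies come out of the same process as the correct answers.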