    • Loading dishwashers properly requires an official government license and a test.
    • Putting a flat plate in front of a bowl means a year of hard labor.
    • Loading any of the good kitchen knives is an automatic 10 years.
  • Back in college, a local doughnut shop would open at 3am to start making the day's run. If you were stoned or pulling an all-nighter, it was worth the 15-minute walk. They would sell you fresh, warm doughnuts out the back.

    Nowadays, a lot of places do the fresh-doughnut thing. But it's not the same as getting them at 3am.

  • https://www.espressif.com/en/news/ESP32-S3-BOX-3

    There's a model with a more expensive dock, or one without. The one without worked fine, but it had to be the Box 3, not the Box 2. It worked pretty well, and you could create custom images to indicate whether it was listening, thinking, etc.

    Instructions here: https://www.home-assistant.io/voice_control/s3_box_voice_assistant/

    The box isn't powerful enough to run an LLM itself; it's just good enough to act as an audio conduit. You can either use their cloud integration with ChatGPT or, now, Anthropic Claude. But if you had a powerful Home Assistant server, say an Nvidia Jetson or a PC with a beefy Nvidia GPU, you could run local models like Llama and have better privacy (rough sketch of the local-model side below).

    This is from earlier this year. I imagine they've advanced more since then.
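
    To make the local-model option concrete, here's a minimal sketch of querying a Llama model served by Ollama over its REST API. It assumes Ollama is running on the Home Assistant server on its default port and that a model named llama3 has already been pulled; the model name and prompt are just placeholders.

```python
# Hypothetical sketch: ask a local Llama model (served by Ollama) a question.
# Assumes Ollama is listening on its default port 11434 and "llama3" is pulled.
import requests

def ask_local_llm(prompt: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # In the voice-assistant setup, transcribed speech would be fed in here.
    print(ask_local_llm("What's a good time to water the garden?"))
```

    Nothing in that exchange leaves the local network, which is where the privacy win comes from.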

  • I've been using ChatGPT, specialized models on Hugging Face, and a bunch of local ones via ollama. A colleague who is deep into this says Claude is giving him the best results.

    Thing is, it depends on the task. For coding, I've found they all suck. ChatGPT gets you up to a point, then puts out completely wrong stuff. Gemini, Microsoft, and CodeWhisperer put out half-baked rubbish. If you don't already know the domain, finding the bugs will be frustrating.

    For images, I've tried DALL-E for placeholder graphics. The problem is, if you change a single prompt element to refine the output, it generates completely different images with no way to go back. Same with Adobe's generators. Folks have recommended Stability for related images; I'll be trying that next (rough sketch below).

    Most LLMs are just barely acceptable. Good for casual messing around, but I wouldn't bet the business on any of them. Once the novelty wears off, and the CFOs tally up the costs, my prediction is a lot of these are going away.
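
    For reference, this is roughly what trying one of Stability's models locally looks like with the diffusers library. The model id and prompt are placeholders, and pinning the generator seed is one way to keep the output from changing completely every time you tweak a prompt element.

```python
# Rough sketch: local Stable Diffusion via the diffusers library.
# Model id and prompt are placeholders; requires a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",   # assumed model id
    torch_dtype=torch.float16,
).to("cuda")

# Fixing the seed keeps results reproducible while you refine the prompt.
generator = torch.Generator(device="cuda").manual_seed(42)
image = pipe(
    "flat vector illustration of a friendly robot mascot",
    generator=generator,
).images[0]
image.save("placeholder.png")
```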

  • A few jobs ago, everyone hated the tech stack. The people who had come up with it had long since left. I talked to everyone, then came up with a plan to transition to a modern stack. Got buy-in from management.

    Half the people (and all who had said they hated the status quo) threatened to quit if we made the change.

    Fortunately, it was just in time to collect the 1-year retention bonus. Life's too short. Walked away.

  • Installed RabbitMQ for use with Python Celery (for task queues and crontab-style scheduling). Was pleasantly surprised it also offers MQTT support.

    Was originally planning on using a third-party, commercial combo websocket/push notification service. But between RabbitMQ/MQTT with websockets and Firebase Cloud Messaging, I'm getting all of it: queuing, MQTT pubsub, and cross-platform push, all for free. 🎉

    It all runs nicely in Docker, and when it's time to deploy and scale, I trust RabbitMQ more since it has solid cluster support (rough sketch of the Celery wiring below).
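
    For the curious, the Celery side of that is just pointing the broker at RabbitMQ and giving beat a crontab entry. App, task, and broker names here are made up; treat it as a sketch, not the actual project code.

```python
# Sketch of Celery on a RabbitMQ broker with a crontab-style periodic task.
# App/task names and the broker URL are placeholders.
from celery import Celery
from celery.schedules import crontab

app = Celery("myapp", broker="amqp://guest:guest@localhost:5672//")

@app.task(name="send_push_notification")
def send_push_notification(user_id: int, message: str):
    # Hand off to Firebase Cloud Messaging, publish an MQTT event, etc.
    ...

# Run with `celery -A myapp worker` plus `celery -A myapp beat` for the schedule.
app.conf.beat_schedule = {
    "nightly-digest": {
        "task": "send_push_notification",
        "schedule": crontab(hour=3, minute=0),
        "args": (0, "nightly digest"),
    },
}
```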

  • Once they get Threads support, their target audience will be the non-Twitter universe. This would make it easier for businesses, governments, journalists, and non-technical folks like influencers and celebrities to switch over. That's how you get mass adoption.

    I just tried it last week. Good start. Lots of promise.

  • Since nobody's brought it up: MQTT.

    It got pigeonholed into the IoT world, but it's a pretty decent event pubsub system. It has lots of security/encryption options, plus a websocket layer, so you can use it anywhere from devices to mobile to web.

    As of late last year, RabbitMQ started supporting it as a server add-on, so it's easy to use to create scalable, event-based systems, including multiuser games (quick client sketch below).
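
    A minimal client sketch, assuming RabbitMQ's MQTT plugin on its default port (1883), the default guest credentials, and paho-mqtt 2.x on the Python side; the topic names and payload are made up. For browsers you'd point an MQTT-over-websockets client at the web-MQTT plugin instead.

```python
# Minimal pub/sub sketch against RabbitMQ's MQTT plugin (default port 1883).
# Assumes paho-mqtt 2.x; topic names and payload are made-up examples.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, reason_code, properties):
    client.subscribe("game/+/events")  # wildcard: events from every game room
    client.publish("game/room42/events", '{"player": "fu", "action": "join"}')

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.username_pw_set("guest", "guest")   # RabbitMQ's default credentials
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)
client.loop_forever()
```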