
Posts
0
Comments
215
Joined
2 yr. ago

  • I've tried using an LLM for coding - specifically Copilot for VS Code. Only about 4 out of 10 times will it generate accurate code - which means I spend more time troubleshooting, correcting, and validating what it generates than actually writing code.

  • I was just commenting on how shit the Internet has become as a direct result of LLMs. Case in point - I wanted to look up how to set up a router table so I could do some woodworking. The first result started out halfway decent, but the second section switched abruptly to something about routers having WiFi and Ethernet ports - confusing network routers with the power tool. Any human editor would catch that mistake, but here it is.

    I can only see this getting worse.

  • I'm doing that with docker compose in my homelab, it's pretty neat!

     
        
    services:
      ollama:
        image: ollama/ollama
        container_name: ollama
        pull_policy: always
        tty: true
        restart: unless-stopped
        volumes:
          # Bind mount so downloaded models survive container rebuilds
          - /etc/ollama-docker/ollama:/root/.ollama
        ports:
          - 11434:11434
        deploy:
          resources:
            reservations:
              devices:
                # Pass the first NVIDIA GPU through for accelerated inference
                - driver: nvidia
                  device_ids: ['0']
                  capabilities:
                    - gpu
    
      open-webui:
        image: ghcr.io/open-webui/open-webui:main
        # The build block only matters if you build the image locally
        # instead of pulling the published one above
        build:
          context: .
          dockerfile: Dockerfile
          args:
            OLLAMA_BASE_URL: '/ollama'
        container_name: open-webui
        volumes:
          - /etc/ollama-docker/open-webui:/app/backend/data
        depends_on:
          - ollama
        ports:
          - 3000:8080
        environment:
          # Point the UI at the ollama service over the compose network
          - 'OLLAMA_BASE_URL=http://ollama:11434/'
          - 'WEBUI_SECRET_KEY='
        extra_hosts:
          - host.docker.internal:host-gateway
        restart: unless-stopped
    
    # Named volumes (unused here - the services bind-mount host paths instead)
    volumes:
      ollama: {}
      open-webui: {}
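    For anyone trying this, here's roughly how I bring the stack up and sanity-check it (the model name is just an example - pull whatever you want):

    ```shell
    # Start both services in the background
    docker compose up -d

    # Check that the Ollama API is answering on the host
    curl -s http://localhost:11434/api/version

    # Pull a model inside the ollama container ("llama3" is just an example)
    docker compose exec ollama ollama pull llama3

    # Open WebUI should then be reachable at http://localhost:3000
    ```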
    
      
  • I spun up a new Plex server with a decent GPU - and decided to try offloading Home Assistant's Preview Voice Assistant TTS/STT to it. That's all working as of yesterday, including an Ollama LLM for processing.

    Last on my list is figuring out how to get Home Assistant to help me find my phone.

  • I read the comment as saying return-to-office was leading to sedentary behaviors - which I would believe. My commute is 45 minutes each way in ideal traffic - that's an additional hour and a half of just sitting on days I have to go into the office, compared to my work-from-home days.

  • I (unfortunately) am in the market for a new-to-me car and I'm so pissed that my only options are either 15+ year old vehicles or spyware on wheels.

    I may not rip out any antennas, but I'm damn sure going to try and interfere with them somehow.

  • The fact that this administration is flat-out refusing to comply with judicial decisions. This is literally the "you and whose army" stage of our trip down authoritarian hell, where the executive dares the judiciary to enforce its decisions.