π˜‹π˜ͺ𝘳𝘬
π˜‹π˜ͺ𝘳𝘬 @ Dirk @lemmy.ml
Posts
10
Comments
1,670
Joined
2 yr. ago

  • It's 5 minutes a day, but it adds up to an hour every 3 weeks (assuming a 5-day work week).

  • Also: If someone manages to tamper with the downloadable ISO … they likely will be able to tamper with the signature files, too.

  • Mmh, okay. So I'll continue re-downloading videos in non-HDR variants. Good to see it implemented, though.

  • I'm not following Linux drama, sorry.

  • So no more dark and dull looking videos?

  • but I’d like to give Nginx Proxy Manager a try, it seems easier to manage stuff not in docker.

    NPM is pretty agnostic. If it receives a request for a specific address and port combination, it just forwards the traffic to another specific address and port combination. That can be a Docker container, but it can also be a physical machine or any random URL.

    It also has Let's Encrypt included (but that should be a no-brainer).
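
    Getting it running is a single container. A minimal sketch, assuming the project's documented image name, ports, and volume layout (adjust paths as needed):

    ```bash
    # Minimal sketch: Nginx Proxy Manager in Docker.
    # Ports 80/443 carry the proxied traffic, port 81 serves the admin web UI.
    docker run -d \
      --name nginx-proxy-manager \
      -p 80:80 -p 443:443 -p 81:81 \
      -v "$(pwd)/data:/data" \
      -v "$(pwd)/letsencrypt:/etc/letsencrypt" \
      jc21/nginx-proxy-manager:latest
    ```

    The proxy hosts (those address and port combinations) are then configured through the web UI on port 81.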

  • If you're into watching YouTube: you can add channels as RSS feeds to your reader. The latest 15 videos of a channel are offered via its feed. All you need is the channel ID of the channel whose feed you want to access.

    The channel ID is not visible anywhere on the page, but if you look at the DOM via the browser's developer console, you will find a meta entry `<link rel="canonical" href="https://www.youtube.com/channel/CHANNEL_ID">` in the `<head>`, where CHANNEL_ID is the required ID. There are also websites (easy to find with the appropriate keywords) that look up and return the ID for a given handle.
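
    If you prefer the command line over the developer console, a minimal sketch of that lookup (curl and grep assumed; the handle is a placeholder, and YouTube may serve a consent page in some regions) could look like this:

    ```bash
    # Sketch: pull the channel ID out of a channel page via its canonical link.
    # "@SomeHandle" is a placeholder for the channel handle you want to look up.
    curl -s "https://www.youtube.com/@SomeHandle" \
      | grep -o 'youtube\.com/channel/[A-Za-z0-9_-]*' \
      | head -n 1
    ```

    The ID then goes into the feed URL: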

     
        
    ```
    https://www.youtube.com/feeds/videos.xml?channel_id=CHANNEL_ID
    ```

    If you have a lot of subscriptions, you can use Google Takeout at takeout.google.com to export your YouTube subscriptions as a CSV file. The CSV contains the subscribed channels with their IDs and titles, for you to parse into whatever format your reader needs.
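
    For reference, the export roughly looks like this (hypothetical sample rows; the exact column names can vary with the export's language and Takeout version):

    ```
    Channel Id,Channel Url,Channel Title
    UCxxxxxxxxxxxxxxxxxxxxxx,http://www.youtube.com/channel/UCxxxxxxxxxxxxxxxxxxxxxx,Some Channel
    ```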

    For Newsboat you can use this script on the Abos.csv from my Google Takeout archive:

    ```bash
    # Build a Newsboat urls file from the Takeout subscription export:
    # every non-empty row becomes '<feed url> youtube videos "~<channel title>"'.
    while IFS="," read id url name; do
      feedURL="https://www.youtube.com/feeds/videos.xml?channel_id=${id}"
      [ ! -z "${id}" ] && echo "$feedURL youtube videos \"~${name}\""
    done < <(tail -n +2 Abos.csv) >> urls
    ```

    Edit: Seems like Lemmy messes up the code formatting, but you get the gist ...
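
    A small usage note: the script appends to a file named urls in the current directory, so (assuming that isn't already your default urls file) you could point Newsboat at it directly:

    ```bash
    # Run Newsboat with an explicit urls file (-u selects the urls file to use).
    newsboat -u urls
    ```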

  • QC was such a fun ride…

    It clearly had its moments. There were some weirdly questionable strips. I haven't been following it for a few years now, but I'm happy to see it's still running.

  • He has unlimited money. I am pretty sure he doesn't really care.

  • Here in Germany we learn that in school in 3rd or 4th grade (ca 9-10 years of age).

  • Unfortunately no-one does. Since Google basically killed it, it gets ignored everywhere.

  • Free public education and healthcare are awesome, too. It really shows how much we as a society have grown and left behind the dark ages where those were for the rich only.

  • Except it is, because the prompt doesn’t write itself. The art is writing the “perfect prompt” to get the desired result.