Posts 5 · Comments 188 · Joined 2 yr. ago

  • I have a Raspberry Pi 3 with a Hifiberry DAC running OSMC (nicely packaged Kodi on top of Debian) acting as my media center and recently installed Jellycon with the hopes of being able to use server side transcoding for a few formats my old TV doesn't support.

    My verdict: menu navigation is slow, but it's a native Kodi integration (supports widgets), and playback works great once you've made your way through the menus. You can selectively set transcoding options per file type, which is exactly what I needed.

    Best solution I've seen so far, as it also does IR remote passthrough over HDMI if your TV supports it. The addon works in any Kodi setup, of course. I think there might be a way to start playback from the Jellyfin web UI, but I haven't bothered with it. That would fully remedy the menu slowness, I think.

  • Is that a way of saying you think he's wrong?

    I thought the book had an interesting core idea, even if his grasp on technology seems rather loose and I really disliked the literary device he used to explain said idea.

    What's your take on it?

  • Right, I could have been more precise. I'm talking about security risk, not resilience or uptime.

    "It’ll probably be the most secure component in your stack." That is a fair point.

    So, one port-forward to the proxy, and the proxy reaching into both VLANs as required, is what you're saying. Thanks for the help!
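    For what it's worth, here's roughly what that could look like as an nginx config — just a sketch; the hostnames, ports, and VLAN addresses are made up for illustration:

    ```nginx
    # One public port-forward (443) lands on this proxy, which then
    # reaches into both VLANs. All names/IPs below are examples.
    server {
        listen 443 ssl;
        server_name cloud.example.com;

        location / {
            proxy_pass http://10.10.10.5:8080;  # service on the internal VLAN
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    server {
        listen 443 ssl;
        server_name media.example.com;

        location / {
            proxy_pass http://10.10.20.5:8096;  # service on the external-facing VLAN
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```

    The proxy box needs an interface (or route) into each VLAN for the `proxy_pass` targets to be reachable; the firewall only ever sees the single forwarded port.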

  • One proxy with two NICs downstream? Does that solve the "single point of failure" risk or am I being overly cautious?

    Plus, the internal and external services are running on the same box. Is that where my real problem lies?

  • I was thinking training montage, with Eye of the Tiger and everything.

    In all seriousness, picture your dude's face! He will have forgotten all about that bet (he might have even now) and one regular sunny day you CASUALLY walk on over to that conveniently located stage; "hold that drink for me for a second, honey", and BAM. He won't know what's even happening, crying into both of your milk shakes in joy and confusion.

    Plus, you'll be super buff. There's no downside, really.

  • selfh.st

    selfh.st is an independent publication created and curated by Ethan Sholly. [...] selfh.st draws inspiration from a number of sources including reddit's r/selfhosted subreddit, the Awesome-Selfhosted project on GitHub, and the #selfhosted/#homelab communities on Mastodon.

    and also

    This Week in Self-Hosted is sponsored by Tailscale, trusted by homelab hobbyists and 4,000+ companies. Check out how businesses use Tailscale to manage remote access to k8s and more.

    awesome-selfhosted.net

    This list is under the Creative Commons Attribution-ShareAlike 3.0 Unported License. Terms of the license are summarized here. The list of authors can be found in the AUTHORS file. Copyright © 2015-2024, the awesome-selfhosted community

  • I remember the software being very janky, too. But then again, that was Windows 95 days. 😅

    Monkey island for hours and hours. Man I had a good time with that PC. Thanks for bringing back those memories.

    Tape drive! I had one of those back in the mid-to-late 90s, salvaged from my dad's dead office PC. I was around 10, and the fact that you could take a part out of one machine, put it into another, and have it work, as well as the absolutely insane storage capacity of the tapes... felt like magic. No clue how I knew what to do, either, but it worked.

    Edit: Hazy on the specs, but I think it would have been a Pentium 1 (166MHz) with 16MB RAM, and 1.2GB HDD seems about right. Played the heck out of Rayman on that.

  • Here's the docker stats of my Nextcloud containers (5 users, ~200GB data and a bunch of apps installed):

    No DB wiz by a long shot, but my guess is that most of that 125MB is actual data. Other Postgres containers for smaller apps run 30-40MB. Plus the container separation makes it so much easier to stick to a good backup strategy. Wouldn't want to do it differently.
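    If you want to pull the same numbers for your own setup, a one-off snapshot looks something like this (the container names are examples, not from the original post):

    ```shell
    # Print a single snapshot of CPU/memory usage per container
    # instead of the live-updating view
    docker stats --no-stream nextcloud-app nextcloud-db
    ```

    Leaving the names off shows all running containers, which makes it easy to compare the Nextcloud database against the smaller Postgres instances.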

    This is the setup I have (Nextcloud, Keepass Desktop, Keepass2Android + WebDAV), and K2A handles file discrepancies very well. I always pick "merge" when it flags a conflict on save. Have been using it like that for years without a problem.

    Edit: added benefit, I have the Keepass extension installed in my Nextcloud, so as long as I can gain access to it, I have access to my passwords, no devices needed.

  • Page loading times, general stability. Everything, really.

    I set it up with sqlite initially to test if it was for me, and was surprised how flaky it felt given how highly people spoke about it. I'm really glad I tried with postgres instead of just tearing it down. But my experience is highly anecdotal, of course.

  • Slow and unreliable with sqlite, but rock solid and amazing with postgres.

    Today, every document I receive goes into my duplex ADF scanner to scan to a network share which is monitored by Paperless. Documents there are ingested and pre-tagged, waiting for me to review them in the inbox. Unlike other posters here, I find the tagging process extremely fast and easy. Granted, I didn't have to bring in thousands of documents to begin with but started from a clean slate.

    What's more, development is incredibly fast-moving and really useful features are added all the time.
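    The watched-folder setup described above can be sketched as a docker-compose fragment — assuming a paperless-ngx deployment; the mount path and volume names here are examples, not from the original post:

    ```yaml
    # Sketch: the scanner writes to a network share, which is mounted
    # as the consume directory that Paperless watches for new documents.
    services:
      paperless:
        image: ghcr.io/paperless-ngx/paperless-ngx:latest
        environment:
          PAPERLESS_CONSUMPTION_DIR: /usr/src/paperless/consume
        volumes:
          # Mount the scan share (example path) as the consume dir
          - /mnt/scans:/usr/src/paperless/consume
          - paperless-data:/usr/src/paperless/data
    volumes:
      paperless-data:
    ```

    Anything dropped into the share gets ingested and pre-tagged automatically, ready for review in the inbox.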