
Posts: 0 · Comments: 24 · Joined: 2 yr. ago

  • Presumably CBS/Paramount does have the recordings in a vault somewhere, but it's probably not worth it for them to put them on Paramount+ (who knows what will happen when they go bankrupt). As it is, you're limited to the few episodes individuals bothered to tape and then digitize.

  • How did you search archive.org? I see several hits on https://archive.org/search?query=subject%3A%22Craig+Kilborn%22

    For example, this full episode: https://archive.org/details/kilborn.-0743

    A fair number of them are "streaming only" due to some sort of agreement between archive.org and CBS.

    I also see some on YouTube: https://www.youtube.com/watch?v=YR1fJIqpVrI

    BitTorrent does seem to be completely dead. bt4gprx is a very thorough DHT search engine, so it will find basically anything public that's actually being seeded: https://bt4gprx.com/search?q=Craig+Kilborn. It has no hits for the Late Late Show (just a few other appearances by Craig Kilborn on other shows).
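    The subject search above can also be done programmatically via archive.org's advancedsearch JSON endpoint. A minimal sketch (endpoint and parameter names as commonly documented; the exact field list is an assumption worth checking against the API docs):

```python
import urllib.parse

def build_search_url(subject, rows=50):
    """Build an archive.org advancedsearch query URL for a subject tag."""
    params = {
        "q": f'subject:"{subject}"',
        "fl[]": "identifier",   # only return item identifiers
        "rows": str(rows),
        "output": "json",
    }
    return "https://archive.org/advancedsearch.php?" + urllib.parse.urlencode(params)

url = build_search_url("Craig Kilborn")
# Each hit's identifier maps to https://archive.org/details/<identifier>
```

    Fetching the URL (e.g. with urllib.request) returns JSON with a `response.docs` list of matching items.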

  • Older Boox devices weren't certified for the Play Store, so you couldn't run Play Store apps, but that hasn't been true for a while. You can run pretty much any Android app (though many don't work well on e-ink). The older Boox devices also run older Android versions that aren't compatible with many apps in the Play Store even if they can connect.

    I think you're referring to KOReader, which started life as alternative Kobo e-reader firmware but now has an Android port. It just runs as an Android app; that's what I run on my Boox Palma, but if the device reboots, I have to relaunch KOReader.

  • I do this, but with KeePass (KeePass on all devices, synced with Nextcloud). It's saved my butt a few times: I can go into the file history and pull an old version of the KeePass database out of it, and KeePass has a merge feature, so I can merge the old file with the current one to recover missing records.

    Anyway... backups good.

  • WorldCat lists the institutions holding this book: https://search.worldcat.org/title/1048005393. You'll probably have to show up in person. If your local library doesn't have the book, you may be able to get it from a sister library via inter-library loan. However, the only public library that lists having the book is the Austin Public Library in Austin, Texas.

  • Right, only one side of the connection needs an open port (and most clients will let that be either the seed or leech side). This is why having an open port on your end is useful when you're downloading: it lets you download from seeders that don't have an open port themselves.
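    The "only one side needs an open port" point is just ordinary TCP: one peer listens for inbound connections, the other dials out, and once connected data flows both ways. A minimal localhost sketch with plain sockets (not the BitTorrent wire protocol):

```python
import socket
import threading

# Peer A has an "open port": it listens for inbound connections.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))        # 0 = let the OS pick a free port
listener.listen(1)
open_port = listener.getsockname()[1]

def accept_one():
    conn, _ = listener.accept()
    conn.sendall(b"piece data")        # either side can send once connected
    conn.close()

threading.Thread(target=accept_one).start()

# Peer B has no open port: it only makes an outbound connection,
# yet data still flows to it over the resulting socket.
dialer = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
dialer.connect(("127.0.0.1", open_port))
received = dialer.recv(1024)
dialer.close()
listener.close()
```

    If neither peer can accept an inbound connection, no socket can be established at all, which is why two firewalled peers can't trade with each other.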

  • If you want the instances to sync, you just need to sync the directory. (I currently use Nextcloud sync for this, but in the past I used Syncthing, and before that BTSync.)

    If they don't try to modify the files at the same time (allowing for sync delay) there won't be any issues. If they do get out of sync, you can fix it pretty easily with a db repair.

  • The Mastodon spam has mostly died down... we'll see how that ends up. We saw similar things there: admins got told to turn on approvals for all new accounts, which isn't super scalable.

    Anyway, the Fediverse is finally important enough to send spam to; we'll see how well devs can build solutions for spam on a federated platform.

    But yeah, generally agree: there's no conspiracy here, it's just that fighting spam is hard... and having an account with X amount of karma is hard for a bot to pull off (without leaving a really obvious paper trail of bots upvoting each other).

  • Something like that is probably technically possible, but you'd need to do a bunch of work.

    Plex plugins can't provide media sources anymore, so you'd need to do the trick plex_debrid does: add stub entries to the Plex server library and serve the actual files from a virtual filesystem.

    You might be able to reuse the plex_debrid code but use youtube-dl instead of rclone.

  • When the program aired originally, VH1/Viacom would have bought time-limited, media-specific licenses (i.e. "you can play my song on cable TV on this program for 5 years for a flat fee of $X").

    If they wanted to release the thing again on a different medium (say internet streaming, or DVD), they'd need to find out who owns the rights now (they could have changed hands if the rights were sold or whole companies were bought), then pay all the owners more money. For a DVD they could offer something like $0.25 per $15 DVD sale, but streaming is a monthly subscription (or ad-supported), so the royalties all need to be re-evaluated.

    Anyway, paying lawyers/accountants to sort it all out is an expense in and of itself (in the tens of thousands of dollars range) for maybe hundreds or thousands of dollars in revenue, and it just doesn't pencil out.
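    The back-of-envelope version, with every number a made-up illustration (not actual licensing figures):

```python
# All numbers below are invented to illustrate the "doesn't pencil out" point.
songs_to_clear = 40          # licensed tracks across the series
legal_cost_per_song = 750    # lawyer/accountant time to trace and renegotiate each
clearance_cost = songs_to_clear * legal_cost_per_song      # $30,000 up front

dvd_price = 15.00
expected_dvd_sales = 2_000   # niche back-catalog title
margin_per_dvd = 3.00        # after manufacturing, distribution, and the retail cut
expected_margin = expected_dvd_sales * margin_per_dvd      # $6,000 total

# Clearance alone exceeds the entire expected margin, before any royalties.
print(clearance_cost, expected_margin)
```

    Even if every assumed number is off by 2x in the studio's favor, the clearance work still eats most of the upside.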

  • This uses ffmpeg under the hood and muxes the audio into an .m4a file without transcoding, keeping whatever compression YouTube used for the audio (some sort of MPEG-4-compatible audio; the exact codec probably varies a bit).

    The audio has still been compressed along the way, but this is the best you can do using YouTube as the source:

    • the uploader compresses the audio (almost certainly; in theory you could skip this step by uploading with lossless audio)
    • the uploader uploads to YouTube
    • YouTube re-compresses the audio again (almost certainly transcoding into a different codec)
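    The no-transcode remux step amounts to ffmpeg stream-copying the audio track into an .m4a container. A sketch of the invocation (the filenames are placeholders; the flags are standard ffmpeg stream-copy options):

```python
import subprocess

def build_remux_cmd(src, dst):
    """ffmpeg invocation that drops video and copies the audio bit-for-bit."""
    return [
        "ffmpeg",
        "-i", src,       # downloaded YouTube file
        "-vn",           # no video stream in the output
        "-c:a", "copy",  # copy audio as-is: no re-encode, no extra quality loss
        dst,
    ]

cmd = build_remux_cmd("video.mp4", "audio.m4a")
# subprocess.run(cmd, check=True)   # uncomment to actually run ffmpeg
```

    Because `-c:a copy` never decodes the audio, the output is bit-identical to whatever YouTube served, just in a different container.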
  • Sorry if this sounds combative, but I just don't think I'm understanding what's going on; I can't figure out how this could possibly work.

    How does that even work, though? The exported doc is just a web page; it doesn't have any Google watermarks (except the now-invisible ones) marking it as a Google web page.

    And if it's hosted on an external domain, it doesn't have the Google domain in the URL bar either...

    So how is the scam victim fooled, versus a normal web page with the same information? How is a Google Docs HTML export visually different from a LibreOffice or Microsoft Office HTML export in a way that tricks the victim into thinking it's legitimately from Google, thereby laundering the scammer's reputation through Google? I know scam victims are generally distracted or otherwise not thinking clearly (or just dumb), but how does this work?

    Beyond the default font, basically any word processor's HTML export looks the same to a layman: plain black text on a white background with 1-inch margins. If scam victims trust plain white backgrounds and simple formatting, there are a ton of ways to achieve that effect that bypass Google entirely.