How to get lemmy to not crop images?
e0qdk (@e0qdk@reddthat.com) · Posts: 1 · Comments: 167 · Joined: 2 yr. ago

As someone who watches gaming footage on PeerTube, I've mostly interacted with single creator instances -- i.e. either the creator themselves is self-hosting it or it's run by a fan as a non-YT backup of their Twitch/Owncast/whatever VODs. Those instances generally do not allow anyone else to upload.
Discoverability sucks but the way I've found them is by using SepiaSearch and looking for specific words from game titles. I imagine the way most other people find them is that they already know the content creator from Twitch and want to find an old VOD that isn't archived on YT (e.g. because of YT's bullshit copyright system) -- but that's just a guess.
Wait, am I also an LLM? What's happening? Why have we made robots whose only job is to dilute reality?
I'm sorry. Your purpose is to pass the butter. Through your colon.
I was going to say something similar -- having seen 20+ car crashes from my apartment window, I can confirm that many people do not stop when the light is red...
Be careful out there.
YMMV outside the US, but typeface is explicitly NOT copyrightable there at least: https://www.ecfr.gov/current/title-37/chapter-II/subchapter-A/part-202/section-202.1
There's a loophole about digital font files since parts of common font file formats are considered copyrightable computer programs, but the shape itself is not protected by copyright.
Wikipedia has an article that includes some details from other jurisdictions: https://en.wikipedia.org/wiki/Intellectual_property_protection_of_typefaces
(If you really need to depend on it though, talk to a lawyer who specializes in IP law in the jurisdictions that you care about.)
It's surprising that there doesn't seem to be an obvious way in the UI to just see a list of creators/channels on a local instance. So, that's the first thing I'd change to improve discoverability.
The way I currently find relevant content is by going to Sepia Search, putting in exact words that I think are likely to be in the title of at least one video on a channel that would likely also have a lot of other relevant content, and then going through that channel's playlists. Those searches often lead me to single user instances with only one or two channels (e.g. a channel that has a backup of that user's YouTube content and a channel with a backup of their Twitch or OwnCast or whatever streams). When it leads me to a generalist instance or one with a relevant subject/theme though, I've had little luck finding content from anyone else unless they've posted recently (compared to other users). Often the content that is most relevant to me is not what is newest but the archives from years ago. (New content is relevant though once I want to follow someone in particular, but it's not what I want to see first.)
Another issue I've encountered is with the behavior of downloaded videos. I greatly appreciate that PeerTube provides a URL for direct download, and I prefer to watch videos in my own player, downloaded in advance (so I can watch offline, pause and resume trivially after putting my computer to sleep, etc.). H264 MP4 works fine for this, but the download seems to be some sort of chunked variant of it (for HLS?) which requires the player to read in the entire file to figure out the length or seek accurately. Having to wait a minute or two to be able to seek each time I open a large video file off my HDD is an irritating papercut. I suspect there's likely a way to fix it by including an index in the file (or in a sidecar file) but I don't know how to do it -- short of re-encoding the entire video, which I'd rather not do since it both takes a long time and can result in quality loss. (EDIT: ffmpeg -i input.mp4 -vcodec copy -acodec copy -movflags faststart output.mp4 repacks the video quickly.)
This usually doesn't affect newly added videos (where the download link includes the pattern /download/web-videos and a warning is shown that the video is still being transcoded), but it does once transcoding finishes (the URL includes /download/streaming-playlists/hls/videos instead); so this is something that happens as a result of PeerTube's reprocessing.
Downloads from the instances that I've found to be most relevant to me are also pretty unreliable (connection is slow and drops a lot), so I use wget with automatic retries (and it sometimes still needs manual retries...) rather than downloading through my browser which tends to fail and then often annoyingly start over completely if I request a retry... It would be really nice if I could check that I've downloaded the file correctly and completely with a sha256 hash or something.
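For what it's worth, the workflow above can be scripted. This is a sketch, not anything PeerTube provides: the URL is a placeholder, and since instances don't publish checksums, the sha256 step can only verify your own copy against a hash you recorded earlier.

```shell
# Placeholder URL -- substitute the instance's actual direct-download link.
url="https://example.org/download/streaming-playlists/hls/videos/some-video.mp4"

# -c resumes a partial file instead of starting over; --tries and --waitretry
# handle the automatic retries; --retry-connrefused also retries hard drops.
wget -c --tries=20 --waitretry=5 --retry-connrefused -O video.mp4 "$url" \
  || echo "download incomplete; re-run this script to resume where it left off"

# No published hash to check against, but recording your own lets you verify
# later copies/backups byte-for-byte with sha256sum -c.
if [ -f video.mp4 ]; then
  sha256sum video.mp4 > video.mp4.sha256
  sha256sum -c video.mp4.sha256
fi
```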
Hmm. Not sure. Some bosses that immediately came to mind were O&S in Dark Souls 1 and Hume in Eternal Daughter though. I think I had more trouble with the latter, but it's been so long that I'm not sure.
I still have and use an Xbox360 controller despite not having an Xbox. The fact that it takes AA batteries and I can just pop out my rechargeable ones and swap 'em onto a generic charger instead of having to hook the controller up to a special charger (and then wait / use it with cable) is quite nice.
I've had to review resumes when we were trying to find someone else to bring on the team. My boss dumped hundreds of resumes on me and asked if any of them looked promising -- that's after going through whatever HR bullshit filters were in place -- on top of all the other work I was already behind on since we didn't have enough staff. That is the state of mind you should expect someone to be in while looking at your project.
If anyone looks at your repo, they're going to check briefly to see if you have any clue at all what you're doing and whether your code looks like it's written by the kind of person they can stand working with. Don't make any major blunders that someone would notice with a quick glance at the repository. Be prepared to talk about your project in detail and be able to explain why you made the choices you did -- you might not get asked, but if you are you should be able to justify your choices. If it gets to the point of an interview and your project looks like something that could've been done easily in 100 lines of Python you'd better believe I'm going to ask why the hell you wrote it in C in 2025... and I say that as someone who has written a significant amount of C professionally.
If you say you have multiple years of professional programming experience and send me a link to a repo that has .DS_Store in it... your resume is going straight into the trash.
"Make me one with everything." -- Zen Master, Instructions to the Hotdog Vendor
what is the legitimate use case?
You do a whole bunch of research on a subject -- hours, days, weeks, months, years maybe -- and then find something that sparks a connection with something else that you half remember. Where was that thing in the 1000s of pages you read? That's the problem (or at least one of the problems) it's supposed to solve.
I've considered writing similar research tools for myself over the years (e.g. save a copy of the HTML and a screenshot of every webpage I visit automatically marked with a timestamp for future reference), but decided the storage cost and risk of accidentally embarrassing/compromising myself by recording something sensitive was too high compared to just taking notes in more traditional ways and saving things manually.
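The HTML half of that idea is only a few lines; here's a rough sketch (the screenshot half would need browser automation, which I've left out, and the archive/ directory name and helper are made up for illustration):

```python
# Save a timestamped copy of a page's HTML for later grepping/reference.
import datetime
import pathlib
import urllib.request

ARCHIVE_DIR = pathlib.Path("archive")  # arbitrary location

def archive_page(url: str) -> pathlib.Path:
    """Fetch `url` and write its HTML under a timestamped filename."""
    html = urllib.request.urlopen(url).read()
    stamp = datetime.datetime.now().strftime("%Y%m%dT%H%M%S")
    # Flatten the URL into something filesystem-safe for the filename.
    safe = "".join(c if c.isalnum() else "_" for c in url)[:80]
    ARCHIVE_DIR.mkdir(exist_ok=True)
    path = ARCHIVE_DIR / f"{stamp}_{safe}.html"
    path.write_bytes(html)
    return path
```

The storage-cost and sensitive-data concerns apply to exactly this kind of indiscriminate capture, which is why I never deployed it.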
It's an absolute long-shot, but are there any careers that feel like the research part of grad school, but without the stuff that's miserable about it (the coursework and bureaucracy)?
There's no getting away from the bureaucracy, but it is possible to get career positions in academia -- and I don't mean as a professor, either. Check your university's job site. If they're big, they almost certainly have one. Get to know your professors too, and make sure they're aware of the things you're good at (even beyond your immediate subject area if you have additional hobbies/interests/skills) so they can help you find a landing place if things don't work out where you are. If you're willing to do programming -- even if you don't like it -- there is a hell of a lot of stuff that needs to be done in academia, and some of it pays enough to live on. If there's some overlap, it's possible to carve out a niche and evolve a role into a mix of stuff that you're good (enough) at but dislike, and stuff that you like but which doesn't necessarily always have funding...
Don't know about PGE's API, but for the OCR stuff, you may get better results with additional preprocessing before you pass images into tesseract -- e.g. crop to just the region of interest, and try various image processing techniques to make the text pop out better if needed. You can also run tesseract while specifically telling it you're looking at a single line of text, which can give better results (e.g. --psm 7 for the command line tool). OCR is indeed finicky...
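Concretely, a crop-and-threshold pipeline might look like this. It's a sketch assuming ImageMagick and Tesseract are installed; I render a fake readout first so the commands are self-contained, and the crop geometry (WIDTHxHEIGHT+X+Y) is per-image, so adjust it to your screenshots.

```shell
# Stand-in for a real screenshot: render a fake meter readout.
convert -size 400x100 xc:white -pointsize 48 -fill black \
        -annotate +20+70 "00123.45" screenshot.png

# 1. Crop to just the region of interest and force high-contrast
#    black-and-white so the digits pop out.
convert screenshot.png -crop 380x80+10+10 -colorspace Gray -threshold 60% meter.png

# 2. OCR it as a single line (--psm 7), restricted to digits and the dot.
tesseract meter.png stdout --psm 7 -c tessedit_char_whitelist=0123456789.
```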
Congrats on finishing!
Visual novels and interactive fiction come to mind as things that are video game adjacent but aren't necessarily games. Most of the first category I've encountered are either porn, horror, or... both -- though they can be about anything the author wants to write about, of course, and the relative accessibility of the medium means people have pushed it in a lot of directions even though it's kind of niche.
Interactive fiction includes things like text adventures and choose-your-own-adventure books. Most of the computer-based ones I've encountered involve traversing a node-graph of locations, manipulating items, and solving puzzles -- though the gaminess varies a lot depending on the specific title. They're even more niche nowadays, but people still make and play/read them.
Sometimes years.
The oldest project I haven't actually given up on entirely has been rattling around in my head for somewhere around 15 years, I think, with occasional bursts of progress.
(I also have an anxiety disorder... 🙃️)
I'm not sure what the average length would be though.
It's really about lowering cognitive load when making edits. It's not necessarily that someone can't figure out how to do something more sophisticated, but that they're more likely to get things right if the code is just kind of straightforwardly dumb.
The last two are definitely situational -- changing things like that might lower cognitive load for one kind of work but raise it significantly for another -- but I can see where they're coming from with those suggestions.
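A toy illustration of the cognitive-load point (my own example, not from the article): both functions total the price of in-stock items, but the second is the kind of straightforwardly dumb code that's harder to get wrong when editing -- adding, say, a per-item discount touches one obvious line.

```python
def total_clever(items):
    # Dense: correct, but every edit has to happen inside one expression.
    return sum(price for price, in_stock in items if in_stock)

def total_dumb(items):
    # "Straightforwardly dumb": each step is its own line you can change.
    total = 0
    for price, in_stock in items:
        if in_stock:
            total += price
    return total
```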
Edit: Not sure if this is quite what you were after or not, but it's what came to mind.
Do you agree with this?
Yes, at least for hobby use. If it really needs something more complex than SQLite and an embedded HTTP server, it's probably going to turn into a second job to keep it working properly.
Check your language settings. Usually that means you have the language that the comments are tagged with disabled. (Usually either English or Uncategorized is disabled)