403 on API endpoints
pe1uca (@pe1uca@lemmy.pe1uca.dev) · 37 Posts · 239 Comments · Joined 2 yr. ago
Glad to see you solved the issue. I just want to point out that this might happen again if you forget your DB is in a volume controlled by Docker; it's better to put it in a folder you know.
Last month Immich released an update to the compose file for this, and you need to change part of it manually.
Here's the post in this community https://lemmy.ml/post/14671585
I'll also include this link from the same post; I moved the data from the Docker volume to my own folder without issue.
https://lemmy.pe1uca.dev/comment/2546192
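For reference, the core of that change is just swapping the Docker-managed named volume for a bind mount. A minimal sketch of the idea (not the full Immich compose file; the `./postgres` folder name is just the convention from the post above):

```yaml
services:
  database:
    # ...image, environment, healthcheck, etc. stay as in the official compose file...
    volumes:
      - ./postgres:/var/lib/postgresql/data   # a folder next to the compose file instead of a named volume
```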
Or maybe another option is to make backups of the DB. I saw this project some time ago; I haven't implemented it on my services, but it looks interesting.
https://github.com/prodrigestivill/docker-postgres-backup-local
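Even without that project, a periodic dump already goes a long way. A rough sketch, assuming the database container is called `immich_postgres` and uses the default `postgres` user (adjust the names to your setup, and run it from cron or a systemd timer):

```bash
# Dump all databases in the container and compress them with today's date in the filename
docker exec -t immich_postgres pg_dumpall --clean --if-exists -U postgres \
  | gzip > "immich-db-$(date +%F).sql.gz"
```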
how do I do a fresh install of an older app version on a new device?
You can directly use the files from the GitHub repo: just look for the release your server is on and install that APK.
There's a rumor going around that some sites degrade their performance on non-Chromium browsers; I'm not sure if that also applies to browsers like Opera or Brave, which are also Chromium-based but, I think, more heavily customized.
Another factor could be how much you use each browser: how many tabs did you have open when you tried them? How many extensions? What other programs did you have open?
I'm just annoyed by the region issues: you'll get pretty biased results depending on which region you select.
If you search for something specific to one region while another is selected, you'll sometimes get empty results, which shows you won't get relevant results if you don't properly select the region.
This is probably more obvious with non-technical searches. For example, my default region is canada-en, and if I try "instituto nacional electoral" I only get a wiki page, an international site, and some other random sites with no news; only when I change the region do I get the official page ine.mx and news. To me this means Kagi hides results from other regions instead of just boosting the selected region's.
It's regarding appropriate handling of user information.
I'm not sure it includes PII. Basically it's a ticketing system.
The pointers I got are: the software must store the data securely and reliably, and it must be possible to query it to understand the changes the data has gone through.
> it just seems to redirect to an otherwise Internet accessible page.
I'm using Authelia with Caddy, but I'm guessing it would be similar: you need to configure the reverse proxy to check for the token the authentication service adds to each request and redirect to sign-in if it's missing. This way all requests to the site are protected (of course, you'll need to be aware of APIs or similar non-UI requests).
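As a rough idea of what that looks like in a Caddyfile (a sketch, not my exact config: the hostnames and ports are placeholders, and the verify endpoint differs between Authelia versions, so check Authelia's Caddy integration docs):

```
app.example.com {
    # Every request gets checked against Authelia before it reaches the app
    forward_auth authelia:9091 {
        uri /api/authz/forward-auth   # older Authelia versions use /api/verify?rd=https://auth.example.com/
        copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
    }
    reverse_proxy app:8080
}
```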
> I have to make an Internet accessible subdomain.
That's true, but you don't have to expose the actual services you're running. An easy solution would be to name it something else, especially if the people using it trust you.
Another would be to create a wildcard certificate; this way only you and those you share your site with will know the actual subdomain being used.
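If you use certbot, a wildcard cert requires the DNS challenge; something along these lines (example.com is a placeholder, and there are DNS plugins to automate the TXT record if your provider is supported):

```bash
# Manual DNS-01 challenge: certbot asks you to create a TXT record at _acme-challenge.example.com
certbot certonly --manual --preferred-challenges dns -d '*.example.com' -d example.com
```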
My advice comes from my personal setup, which is still all internal but remotely accessible via Tailscale, so: do you really need to make your site public to the internet?
It's only worth having it public if you need to share it with multiple people; for just you or a few people it's not worth the hassle.
I've read advice against buying used storage unless you don't mind a higher risk of losing the data on it.
Yesterday I started looking for mini pcs and found this post https://www.reddit.com/r/MiniPCs/comments/1afzkt5/2024_general_mini_pc_guide_usa/
They shared this link, which contains data on 2.8k machines; it helped me compare some of the options I was looking at and find new ones.
https://docs.google.com/spreadsheets/d/1SWqLJ6tGmYHzqGaa4RZs54iw7C1uLcTU_rLTRHTOzaA/edit
Sadly it doesn't contain data about the ThinkPad, but I might as well share it in case you're willing to consider other brands.
Edit: Oh, wait, I was thinking about a ThinkCentre, not a ThinkPad :P
Well, I'll leave this around in case someone finds it useful, hehe.
> It's just a matter of time until all your messages on Discord, Twitter etc. are scraped, fed into a model and sold back to you
As if it didn't happen already
I'd say it depends on your threat model, it could be a valid option.
Still, how are you going to manage them?
A password manager? You'd still be posing the same question: should I keep my accounts in a single password manager?
Maybe what you can do is use aliases; that way you never expose the actual account used to access your inbox, only the addresses used to send you emails.
But I tried this, and some service providers don't handle custom email domains well (especially government and banking, which are slow to adopt new technology).
I sort of did this for some movies I had, to lessen the burden of on-the-fly encoding, since I already know what formats my devices support.
Just something to keep in mind: my devices only support HD, so I had a lot of wiggle room on quality.
Here's the command Jellyfin was running, which helped me start figuring out what I needed:
```
/usr/lib/jellyfin-ffmpeg/ffmpeg -analyzeduration 200M -f matroska,webm -autorotate 0 -canvas_size 1920x1080 -i file:"/mnt/peliculas/Harry-Potter/3.hp.mkv" -map_metadata -1 -map_chapters -1 -threads 0 -map 0:0 -map 0:1 -map -0:0 -codec:v:0 libx264 -preset veryfast -crf 23 -maxrate 5605745 -bufsize 11211490 -x264opts:0 subme=0:me_range=4:rc_lookahead=10:me=dia:no_chroma_me:8x8dct=0:partitions=none -force_key_frames:0 "expr:gte(t,0+n_forced*3)" -sc_threshold:v:0 0 -filter_complex "[0:3]scale=s=1920x1080:flags=fast_bilinear[sub];[0:0]setparams=color_primaries=bt709:color_trc=bt709:colorspace=bt709,scale=trunc(min(max(iw\,ih*a)\,min(1920\,1080*a))/2)*2:trunc(min(max(iw/a\,ih)\,min(1920/a\,1080))/2)*2,format=yuv420p[main];[main][sub]overlay=eof_action=endall:shortest=1:repeatlast=0" -start_at_zero -codec:a:0 libfdk_aac -ac 2 -ab 384000 -af "volume=2" -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 -f hls -max_delay 5000000 -hls_time 3 -hls_segment_type mpegts -start_number 0 -hls_segment_filename "/var/lib/jellyfin/transcodes/97eefd2dde1effaa1bbae8909299c693%d.ts" -hls_playlist_type vod -hls_list_size 0 -y "/var/lib/jellyfin/transcodes/97eefd2dde1effaa1bbae8909299c693.m3u8"
```
From there I played around with several options and ended up with this command (it has several map options since I was actually combining several files into one):
```
ffmpeg -y -threads 4 \
  -init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda \
  -i './Harry Potter/3.hp.mkv' \
  -map 0:v:0 -c:v h264_nvenc -preset:v p7 -profile:v main -level:v 4.0 -vf "hwupload_cuda,scale_cuda=format=yuv420p" -rc:v vbr -cq:v 26 -rc-lookahead:v 32 -b:v 0 \
  -map 0:a:0 -map 0:a:1 \
  -fps_mode passthrough -f mp4 ./hp-output/3.hp.mix.mp4
```
If you want to know other values for each option you can run `ffmpeg -h encoder=h264_nvenc`.
I don't have at hand all the sources where I learned what each option does, but here's what to keep in mind, to the best of my memory.
All of these comments are from the point of view of h264 with nvenc.
I assume you know how the video and stream number selectors work in ffmpeg.
- Using GPU hardware acceleration produces a lower quality image at the same sizes/presets. It just helps take less time to process.
- You need to modify the `-preset`, `-profile` and `-level` options to your quality and processing-time needs.
- `-vf` was to change the data format my original files had to a more common one.
- The combination of the `-rc` and `-cq` options is what controls the variable rate (you have to set `-b:v` to zero, otherwise that one is used as a constant bitrate).
Try different combinations with small chunks of your files. IIRC the options you need to use are `-ss`, `-t` and/or `-to` to just process a chunk of the file and not have to wait for hours processing a full movie.
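For example, a quick test of my command above could look something like this (the timestamps, the plain aac encoder, and the output name are just placeholders to adjust to your files):

```bash
# Encode only 60 seconds starting at the 10 minute mark to compare settings quickly
ffmpeg -y \
  -init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda \
  -ss 00:10:00 -i './Harry Potter/3.hp.mkv' -t 60 \
  -map 0:v:0 -c:v h264_nvenc -preset:v p7 \
  -vf "hwupload_cuda,scale_cuda=format=yuv420p" \
  -rc:v vbr -cq:v 26 -b:v 0 \
  -map 0:a:0 -c:a aac -b:a 192k \
  ./hp-output/test-chunk.mp4
```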
> Assuming that I have the hardware necessary to do the initial encoding, and my server will be powerful enough for transcoding in that format
There's no need to have a GPU or a big CPU to run these commands. The only problem will be the time.
Since we're talking about preprocessing the library, you don't need real-time encoding; your hardware can take one or two hours to process a 30-minute video and you'll still have the result, so you only need patience.
You can see Jellyfin uses `-preset veryfast` and I use `-preset p7`, which the documentation marks as slowest (best quality).
This is because Jellyfin only processes the video while you're watching it, so it needs to produce frames faster than your devices display them. But my command doesn't have that constraint: I just run it, and whenever it finishes I'll have the files ready for when I want to watch them, with no need for an additional transcode.
I think you have two options:
- Use a reverse proxy, so you can even have a different domain for each instead of a path. The configuration for this changes depending on your reverse proxy (see the sketch after this list).
- You can change the config of your Pi-hole in `/etc/lighttpd/conf-available/15-pihole-admin.conf`. In there you can see the base URL being used and the other redirects it has. You just need to remember to check this file each time there's an update, since it warns you it can be overwritten by that process.
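For the first option, with a reverse proxy like Caddy it's only a few lines. A sketch with placeholder domains and ports:

```
# Each service gets its own (sub)domain instead of a path
pihole.example.com {
    reverse_proxy 192.168.1.10:80
}
otherapp.example.com {
    reverse_proxy 192.168.1.10:8080
}
```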
Are you sure your IP is only used by you?
AFAIK ISPs usually bundle the traffic of several users behind a few public IP addresses (carrier-grade NAT), so maybe the things you see are just someone else in your area going out from the same IP your ISP provides.
But I'm not actually sure if this is how it works, I might be wrong.
Just did the upgrade. I only went and copied the Docker folder for the volume:
```
# docker inspect immich_pgdata | jq -r ".[0].Mountpoint"
/var/lib/docker/volumes/immich_pgdata/_data
```
Inside that folder were all the DB files, so I just copied them into the new folder I created for `./postgres`.
I thought there would be issues with the file permissions, but no, everything went smoothly and I can't see any data loss.
(This even was a migration from 1.94 to 1.102, so I also did the pgvecto-rs upgrade)
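In case it helps someone, the whole migration boils down to something like this (a sketch; the stack folder, volume name and `./postgres` path are from my setup, so adjust them to yours):

```bash
cd ~/immich-app
docker compose down                                           # stop the stack so the DB files aren't being written
SRC=$(docker inspect immich_pgdata | jq -r '.[0].Mountpoint')
mkdir -p ./postgres
sudo cp -a "$SRC/." ./postgres/                               # -a keeps ownership and permissions
docker compose up -d
```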
I've been using https://github.com/mbnuqw/sidebery
It also suggests a way to hide the top tab bar; it can be dynamic or permanent depending on how you configure your userChrome.css.
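In case it saves someone a search, the usual userChrome.css snippet for hiding the native tab strip is essentially the one below (you also need to set `toolkit.legacyUserProfileCustomizations.stylesheets` to true in about:config for userChrome.css to load at all; treat it as a sketch, since the selector can change between Firefox versions):

```css
/* Hide the native horizontal tab strip once Sidebery's sidebar replaces it */
#TabsToolbar {
  visibility: collapse !important;
}
```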
It provides a way to set up snapshots, although I haven't tested the restore functionality, hehe.
I'm not sure how you can export them and back them up.
The one I know works for restoring your tabs is https://github.com/sienori/Tab-Session-Manager
But if you use Sidebery for your trees, panels, and groups, this one won't restore them; you'll get back one long list of tabs in a single panel, with no groups or trees.
I already had to restore a session with this one because I changed computers.
It has a way to backup your sessions in the cloud.
You can use GPSLogger to record it locally or send it to whatever service you want.
If you're into selfhosting you can use Traccar, which is focused on fleet management, so it's easy to get reports on the trips made.
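If you want to give Traccar a quick try, it's a single container; a rough sketch from memory (check the Traccar docs for the volumes you'll want to persist config and data, and for the exact ports):

```bash
# 8082 is the web UI; 5055 is the OsmAnd-protocol port GPSLogger can post positions to
docker run -d --name traccar -p 8082:8082 -p 5055:5055 traccar/traccar:latest
```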
As for your second point, I wouldn't trust GPS for this: it can say you weren't moving, since it only samples every so often, or it might say you were speeding because the two points used for the calculation weren't where you actually were at those times.
A dashcam would be better suited for this. I'm not sure how they work, but most probably they can be connected to read data from your car, which would be more trustworthy to whoever decides whether you were actually speeding.
I've been using https://kolaente.dev/vikunja/vikunja
It has options for sharing and assigning people to a task, but I only use it for personal stuff, so I haven't properly checked those features.
I'm not sure how the integration experience would be since I'm not familiar with CalDAV.
What's the feed you have issues with?
Or is it with all feeds?
None of my feeds have a "read remaining paragraphs" link to expand the article in the FreshRSS UI.
As I mentioned, this one sends the full article https://www.404media.co/rss/
And this one has a partial article with a link to open the page on the site http://feeds.arstechnica.com/arstechnica/index/
> I wish for it be like this for all the articles as some of the articles that load in full I’m not always interested in, and end up having to scroll through the whole thing
To skip to the next article you can configure the shortcuts native to FreshRSS, I think the default ones are `h` for the next unread article and `k` for the previous article. (I think these are the defaults because I haven't changed them and I see these in my config screen.)
For mobile I'm using the touch control extension in here https://github.com/langfeld/FreshRSS-extensions
I'm not sure what you mean by articles not loading properly.
I haven't had any issues with FreshRSS' UI showing all the data.
Have you checked whether the feed sends the whole article in it?
For example, Ars' feed sends a few paragraphs and includes a link at the end saying "Read the remaining X paragraphs".
404media's does send all the article content in their feed.
9to5google's only sends you a single line from the article!!
So, it depends on what you need.
If you want to see the full content, you probably need an extension which either curls the link of each item in the feed and replaces the content received from the feed with the one received by the curl, or one which embeds an iframe with the link so the browser loads it for you.
IIRC there are two YouTube extensions which do something similar to change the links to Invidious or Piped: one replaces the content with the links, and the other adds a new element to load the video directly in the feed.
Bots on Lemmy are allowed; that's why the API exists.
Bots on programming.dev seem not to be allowed, since all endpoints require passing the Cloudflare check.