TIL you can extract energy from spacetime itself and, working on that same principle, make black hole bombs
hendrik (@hendrik@palaver.p3x.de) · Posts: 8 · Comments: 1,827 · Joined: 4 yr. ago
It's difficult to make that decision. I mainly tinker with generative AI: try a chatbot or an agent, do some creative writing with it, mess with FramePack or LTX-Video, or explore some text-to-speech, whatever I find interesting when I've got some time to spare.
Obviously I wouldn't spend a lot of money just to mess around. So yeah, I currently just rent cloud GPUs by the hour and I'm fine with that. Once we get a very affordable and nice AI card with lots of fast VRAM, I think I'm going to buy one. But I'm really not sure whether this one, or any previous-generation Radeon, is what I'm looking for. Or whether I'd rather spend quite some time on eBay to find some old 30x0. And as a Linux user I'm not really fond of the Nvidia drivers, so that doesn't make it easier either.
Hmm. I could buy a (new) Radeon 7600 XT right now for around 330€... that should be only slightly more than $300 plus VAT, and that also has 16GB of VRAM and a similar (slightly faster?) memory interface?
Thank you very much for the correct information. I googled it and took the number from some random PC news page. Either they got it wrong or I might need new glasses. Nonetheless, 128- or 192-bit is what Intel has on their website. I wish they'd do more than that for cards with 16GB of VRAM or more. I think two hundred and something GB/s is about what Nvidia, AMD and everyone else already did in their previous generation of graphics cards.
Yeah sure. We just have to build a spaceship, travel a few thousand lightyears, then accelerate the black hole in the desired direction, probably with an amount of energy that also requires us to build a Dyson sphere around the nearest star. Then wait some tens of thousands of years, decelerate it, build the energy-harvesting mechanism around it, and boom, enemy obliterated.
I really feel it could be more efficient to just make some antimatter out of pure energy, here on Earth, and mail it to them in a letter bomb.
I mean, producing antimatter is ridiculous, too. But so is obtaining a black hole. And I don't get why you wouldn't then use it directly as is, or use the energy you wasted on the project directly. But instead you harvest a tiny amount of a black hole's energy to convert specifically that into a weapon... And what kind of application needs "bombs" that are fine to arrive in a few hundred thousand years or so?
Yeah, and where do you get a black hole that happens to have the appropriate size?
I'm not sure about that. I think OP wants something like ATA Secure Erase. That would be hdparm with a bunch of options, not blkdiscard. Unless they specifically know what they're doing, which options to pick, and what the controller will do in return.
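If that's the route, the rough hdparm sequence looks something like this (the device name and the password are just placeholders, the drive must not report as "frozen", and this of course destroys all data on it):

sudo hdparm -I /dev/sdX                                           # check the Security section: supported / not frozen
sudo hdparm --user-master u --security-set-pass mypass /dev/sdX   # set a temporary password (required before erasing)
sudo hdparm --user-master u --security-erase mypass /dev/sdX      # issue the ATA Secure Erase command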
I don't think it's as easy as that. The developers hold that resentment. But that doesn't mean it translates to the users. Also, Lemmy as we know it today has been very much shaped by the Reddit exodus. So even if it had been Marxist at some point (and I'd argue that always played a minor role), that's long gone. And I don't think this is even one of the main issues as of today.
I'd support that. I mean, I'm very okay with the anti-capitalist comments. But I agree that we participate way too much in the rage-baiting, emotional news articles of the day, generally re-posting all the news and memes we got from the newsfeeds, Facebook and Reddit. That's all not very original. And not very useful to me either. I'd rather have a genuine conversation. Preferably about things I like, so hobbies etc.
Why not simply use an antimatter bomb? Wouldn't that have way more explosive power?
I'd watch that. Maybe add another level of innuendo and add Jack Black and Dwayne Johnson to the cast.
Yes, thanks. Just invalidating or trimming the memory doesn't cut it. OP wants it erased, so it needs to be one of the proper erase commands. I think blkdiscard also has flags for that (zero, secure), so you could probably do it with that command as well, if it's supported by the device and you append the correct options. I think other commands are easier to use (if supported).
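For reference, that would look roughly like this (the device name is a placeholder, and whether the secure variant actually works depends on the device and controller):

sudo blkdiscard --secure /dev/sdX    # secure discard of the whole device, if the device supports it
sudo blkdiscard --zeroout /dev/sdX   # or overwrite with zeroes instead of discarding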
I think there are (have been?) tools for this like Remastersys, Pinguy Builder, Respin... Though a lot of them don't seem to be active any more.
Nice. I always look at the memory bandwidth first if it's about AI. And it seems with a 224-bit bus, they've done a better job than the previous cards you'd find in that price segment.
Maybe have a look at https://nginxproxymanager.com/ as well. I don't know how difficult it is to install since I've never used it, but I heard it has a relatively straightforward graphical interface.
Configuring good old plain nginx isn't super complicated. It depends a bit on your specific setup, though. Generally, you'd put a config file into /etc/nginx/sites-available/servicexyz (or put it into the default site file), looking something like this:
server {
    listen 80;
    server_name jellyfin.yourdomain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name jellyfin.yourdomain.com;

    ssl_certificate /etc/ssl/certs/your_ssl_certificate.crt;
    ssl_certificate_key /etc/ssl/private/your_private_key.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;

    location / {
        proxy_pass http://127.0.0.1:8096/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    access_log /var/log/nginx/jellyfin.yourdomain_access.log;
    error_log /var/log/nginx/jellyfin.yourdomain_error.log;
}
It's a bit tricky to search for tutorials these days... I got that from: https://linuxconfig.org/setting-up-nginx-reverse-proxy-server-on-debian-linux
Nginx would then take all requests addressed to jellyfin.yourdomain.com and forward them to your Jellyfin, which hopefully runs on port 8096. You'd use a similar file like this for each service, just adapted to the internal port and (sub)domain.
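To actually enable the site on Debian (assuming the usual sites-available/sites-enabled layout and that you named the file servicexyz), something along these lines should do it:

sudo ln -s /etc/nginx/sites-available/servicexyz /etc/nginx/sites-enabled/
sudo nginx -t                  # check the config for syntax errors
sudo systemctl reload nginx    # apply it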
You can also have all of this on a single domain (and not sub-domains). That'd be the difference between "jellyfin.yourdomain.com" and "yourdomain.com/jellyfin". That's accomplished with one file containing a single "server" block, but with several "location" blocks inside it, like location /jellyfin.
Alright, now that I've written it down, it certainly requires some knowledge. If that's too much, and since all the other people here recommend Caddy, maybe have a look at that as well. It seems to be packaged in Debian, too.
Edit: Oh yes, and you probably want to set up Let's Encrypt so you connect securely to your services. The reverse proxy would be responsible for the encryption.
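On Debian the usual way is certbot with its nginx plugin, roughly like this (the domain is a placeholder; certbot can also rewrite the ssl_certificate lines in the config above for you):

sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d jellyfin.yourdomain.com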
Edit2: And many projects have descriptions in their documentation. Jellyfin has documentation on some major reverse proxies: https://jellyfin.org/docs/general/post-install/networking/advanced/nginx
You'd install one reverse proxy only and make that forward to the individual services. Popular choices include nginx, Caddy and Traefik. I always try to rely on packages from the repository. They're maintained by your distribution and tied into your system. You might want to take a different approach if you use containers, though. I mean if you run everything in Docker, you might want to do the reverse proxy in Docker as well.
That one reverse proxy would get ports 443 and 80. All the services like Jellyfin, Immich... get random higher ports, and your reverse proxy internally connects (and forwards) to those ports. That's the point of a reverse proxy: to make multiple distinct services available via one and the same port.
Right. Do your testing. Nothing here is black and white only. And everyone has different requirements, and it's also hard to get your own requirements right.
Plus they even change over time. I've used Debian before with all the services configured myself, moved to YunoHost, to Docker containers, to NixOS, and partially back to YunoHost over time... It all depends on what you're trying to accomplish, how much time you've got to spare, what level of customizability you need... It's all there for a reason. And there isn't a perfect solution. At least in my opinion.
I think Alpine has a release cycle of 6 months. So it should be a better option if you want software from 6 months ago packaged and available. Debian does something like 2 years(?) so naturally it might have very old versions of software. On the flipside you don't need to put in a lot of effort for 2 years.
I don't think there is such a thing as a "standard" when it comes to Linux software. I mean, Podman is developed by Red Hat. And Red Hat also does Fedora. But we're not Apple here with a tight ecosystem. It's likely going to run on a plethora of other Linux distros as well. And it's not going to run better or worse just because of the company that made it...
Sure. I think we could construct an argument for both sides here. You're looking for something stable and rock solid that doesn't break your stuff. I'd argue Debian does exactly that. It has long release cycles and doesn't give you any big Podman update, so you don't have to deal with a major release update. That's kind of what you wanted. But at the same time you want the opposite of that, too. And that's just not something Debian can do.
It's going to get better, though. With software that has been moving fast (like Podman?), you're going to experience that. But the major changes are going to slow down while the project matures, and we'll get Debian Trixie soon (it's already in hard freeze as of now), which comes with Podman 5.4.2. It'll be less of an issue in the future. At least with that package.
The question remains: are you going to handle updates of your containers and base system better or worse than Debian does... If you don't handle security updates of the containers in a timely manner for all time to come, you might be worse off. If you keep at it, you'll see some benefits. Updates are now in your hands, with both downsides and benefits... You should be fine, though. Most projects do an alright job with their containers published on Docker Hub.
I don't think so. I also started small. There are entire operating systems like YunoHost that forgo containers. All the packages in Debian are laid out to work like that. It's really not an issue by any means.
And I'd say it's questionable whether the benefits of containers apply to your situation. If, for example, you have a reverse proxy and do authentication there, all someone needs to do is break that single container and they'll be granted access to all the other containers behind it as well... If you mess up your database connection, it doesn't really matter whether it runs in a container or in a user account / namespace. The "hacker" will gain access to all the data stored there in both cases. I really think a lot of the complexity and the places to mess up are a level higher, and not something you'd tackle with your container approach. You still need the background knowledge. Containers help you with other things, less so with this.
I don't want to talk you out of using containers. They do isolate stuff. And they're easy to use. There isn't really a downside. I just think your claim doesn't hold up, because it's too general. You just can't say it that way.
Or build a Death Star. I suppose that skips one conversion step: it sucks energy from something and blasts it out directly as a death ray. I guess that would do for most cases where a bomb works. (And it's kind of similar to the idea here, minus the black hole and its spin energy.)