Discord file links will expire after a day to fight malware
Call me old-fashioned, but I'd rather see high native quality available for when it's relevant. If I'm watching gameplay footage (as one example), I'm going to be looking at the render quality.
With more and more video games already trying to use frame generation and upscaling within the engine, at what point does the data loss become too much? Depending on upscaling again during playback means your video experience might depend on which vendor you have - for example, an Nvidia desktop may upscale differently from an Intel laptop with no dGPU, or from an Android phone running on 15% battery.
That would become even more prominent if you're evaluating how different upscaling technologies look in a given video game, perhaps with an intent to buy different hardware. I check in on how different hardware encoders keep up with each other with a similar research method. That's a problem that native high resolution video doesn't have.
I recognize this is one example, and that there is content where quality isn't paramount and frame gen and upscaling are relevant - but I'm not ready to throw out an entire sector of media for a gain that only applies to some of it. Not to mention that not everyone has access to the kind of hardware required to cleanly upscale, and adding upscaling to everything (for everyone who isn't using their PS5/Xbox/PC as a set-top media player) is just going to drive up the cost of already very expensive consumer electronics and add yet another point of failure to a TV that didn't need to be smart to begin with.
There are a few verticals I could find but mostly horizontal. Also for some reason I can find two goblins and two orcs but singles of everything else. I have a suspicion the author planned that.
I can hear the accent XD
Which, given the ability to inject arbitrary code, means you could conceivably write code to list every variable it has access to.
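For illustration (Python here as a stand-in for whatever runtime the injected code actually runs in; the function and variable names are made up), dumping everything the injected code can see might look something like this:

```python
# Hypothetical sketch: if injected code runs inside a Python-like runtime,
# it could enumerate every name it can reach. The "victim" program is made up.
import inspect

def dump_reachable_variables():
    """Print every local and global variable visible from the caller's frame."""
    frame = inspect.currentframe().f_back  # the frame that called us
    print("--- locals ---")
    for name, value in frame.f_locals.items():
        print(f"{name} = {value!r}")
    print("--- globals ---")
    for name, value in frame.f_globals.items():
        print(f"{name} = {value!r}")

def victim_function():
    secret_flag = 0xC0FFEE          # pretend internal state
    player_position = (12, 34)
    dump_reachable_variables()      # the "injected" call

if __name__ == "__main__":
    victim_function()
```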
I think the funnier part of the meme is that the actual song they're playing is the Mission: Impossible theme.
Instant gratification I guess?
On the other side, I've seen a friend Xbox-stream Starfield to Chrome on a PC; when I originally saw the title I was expecting something more like that from Sony.
I guess we're not allowed to have nice things :(
Edit: instant not insurance, thanks autocorrect
That renewal price on Namecheap is $15
Most registrars give you an obnoxiously low first registration price so they can get you with the real fee (often $15-50+ per year) later.
Any time you see the discount price mark on Namecheap, expect to be paying at least the original price per year.
OT, but how is your account from 400 years in the future?
I use it to test that I've set up an authentication system correctly without a cookie bias, among other uses
Set everything up in main > confirm tests pass > log in from incognito with password vault to make sure the auto test didn't lie.
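As a rough sketch of that flow, assuming a hypothetical app at https://example.test with /login and /dashboard endpoints (none of these names come from my actual setup), a cookie-free check with Python's requests might look like:

```python
# A fresh requests.Session carries no cookies, which approximates an
# incognito window for the "no cookie bias" check described above.
import requests

BASE = "https://example.test"   # hypothetical app
CREDS = {"username": "tester", "password": "correct horse battery staple"}

def test_auth_without_existing_cookies():
    fresh = requests.Session()  # no cookies carried over from anywhere

    # An unauthenticated request should be rejected or redirected to login.
    anon = fresh.get(f"{BASE}/dashboard", allow_redirects=False)
    assert anon.status_code in (302, 401, 403)

    # Logging in from the clean session should succeed and set a session cookie.
    login = fresh.post(f"{BASE}/login", data=CREDS)
    assert login.ok
    assert fresh.cookies, "login should have set a session cookie"

    # Now the protected page should be reachable.
    dash = fresh.get(f"{BASE}/dashboard")
    assert dash.ok
```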
They could replace WORM storage, and since the person you responded to mentioned LTO, WORM may be workable for their data set - LTO is traditionally used for backups.
I tend towards always putting heavy cantilever stuff on 4-post rails even if they're generic rails. Lighter 2-post stuff sags enough as it is.
I've got nothing against downloading things only once - I have a few dozen VMs at home. But once you reach a certain point, maintaining offline ISOs for updating can become a chore, and larger ISOs naturally take longer to write to flash install media. Once you get a big enough network, homogenizing to a single distro can also become problematic: some software just works better on certain distros.
I'll admit that I initially missed the point of this post, wondering why there was a post about downloading Debian when their website is pretty straightforward - the title caught me off guard and doesn't quite match what's really on the inside. The inside is much, much more involved than a simple download.
Therein lies the wrinkle: there's a wide spectrum of selfhosters in this community, everyone from people getting their first VM server online with a bit of scripted container magic, all the way to senior+ IT and software engineers who can write GUI front ends to make Linux a router (source: skimming the community's first page). For a lot of folks, re-downloading every time is an OK middle ground because it just works, and they're counting on the internet existing in general to remotely access their gear once it's deployed.
Not everyone is going to always pick the "best" or "most efficient" route every time because in my experience as a professional IT engineer, people tend towards the easy solution because it's straightforward. And from a security perspective, I'm just happy if people choose to update their servers regularly. I'd rather see them inefficient but secure than efficient and out of date every cycle.
At home, I use a personal package mirror for that. It has the benefit of also running periodic replications on a schedule* so it's available as a target that auto-updates work from. It's a bit harder to set up than a single offline ISO, but once it's up it's fairly low maintenance. Off-hand, I think I keep around a few versions each of Ubuntu, Debian, Rocky, Alma, EPEL, Cygwin, Xen, and Proxmox - a representative set of most of my network, covering anywhere I have three or more nodes of a given OS, or where the OS is on a network with Internet access blocked (such as my management network). vCenter serves as its own mirror for my ESXi hosts, and I use Gitea as a Docker repo and CI/CD.
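For anyone curious what the replication piece can look like, here's a minimal sketch: a Python wrapper around rsync that a cron/systemd timer could run. The upstream mirror URLs and local paths are placeholders, not my actual config:

```python
# Rough sketch of a scheduled mirror sync, assuming rsync-capable upstream
# mirrors. URLs and paths below are illustrative only.
import subprocess
from pathlib import Path

REPOS = {
    # local directory name : upstream rsync source (placeholders)
    "debian": "rsync://ftp.us.debian.org/debian/dists/bookworm/",
    "rocky":  "rsync://mirrors.rockylinux.org/rocky/9/BaseOS/",
}

MIRROR_ROOT = Path("/srv/mirror")

def sync_repo(name: str, upstream: str) -> int:
    dest = MIRROR_ROOT / name
    dest.mkdir(parents=True, exist_ok=True)
    # -a preserves metadata, --delete drops packages removed upstream,
    # --partial lets an interrupted run resume instead of starting over.
    result = subprocess.run(
        ["rsync", "-a", "--delete", "--partial", upstream, str(dest) + "/"],
    )
    return result.returncode

if __name__ == "__main__":
    for name, upstream in REPOS.items():
        code = sync_repo(name, upstream)
        status = "ok" if code == 0 else f"rsync exited {code}"
        print(f"{name}: {status}")
```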
I also have a library of ISOs on an SMB share sorted by distro and architecture. These are generally the net install versions or the DVD versions that get the OS installed enough to use a package repo.
I've worked on fully air-gapped systems before, and those can just be a chore in general. Updating by ISO can sometimes be the best way there, because everything else is blocked at the firewall.
*Before anyone corrects me, yes I am aware you can set up something similar to generate ISOs
Your comment does assume a bit of context. They might be a new DM just learning how to do stuff, in which case it's perfectly fine to not be perfect every session.
Heck, it's perfectly fine to not be perfect even for experienced DMs. The most important thing is that whatever happens both the players and the DM learned something.
Absolutely this. I learned so much on my homelab that at this point has more resiliency than some medium businesses (n+1 switching, vSAN for critical VMs, n+0.5 UPS for 15 minutes)
There are still tons of reasons to have redundant data paths down to the switch level.
At the enterprise level, we assume even the switch can fail. As an additional note, only some smart/managed switches (typically the ones with removable modules and cost in the five to six figures USD per chassis) can run a firmware upgrade without blocking networking traffic.
So from both a failure standpoint and for keeping traffic flowing during an upgrade procedure, you absolutely want two switches if that's your jam.
On my home system, I actually have four core switches: a Catalyst 3750X stack of two nodes for L3 and 1Gb/s switching, and then all my "fast stuff" is connected to a pair of ES-16-XG switches, each of which has a port channel of two 10G DACs back to the Catalyst stack, with one leg to each stack member.
To the point about NICs going bad - you're right that it's infrequent, but it can happen, especially with consumer hardware rather than enterprise hardware. Also, at the 10G fiber level, though still infrequent, you see SFPs and DACs go bad at a higher rate than NICs.
So that's the nifty thing about Unix: stuff like this works. When you say "locked up", I'm assuming you're referring to logging in to a graphical environment like GNOME, KDE, XFCE, etc. To an extent, this can even apply to some heavy server processes: just replace most of the references to graphical environments with application access.
Even lightweight graphical environments can take a decent amount of muscle to run, or else they lag. Plus even at a low level, they have to constantly redraw the cursor as you move it around the screen.
SSH and plain terminals (Ctrl-Alt-F#; which number is which varies by distro) take almost no resources to run: SSH/getty (which are already running), a quick process call to the password system, then a shell like bash or zsh. A single GUI application may take more standing RAM at idle than this entire stack. Also, if you're out of disk space, the graphical stack may not even be able to start.
So when you're limited on resources, be it either by low spec system or a resource exhaustion issue, it takes almost no overhead to have an extra shell running. So it can squeeze into a tiny corner of what's leftover on your resource-starved computer.
Additionally, from a user experience perspective, if you press a key and it takes a beat to show up, it doesn't feel as bad as if it had taken the same beat for your cursor redraw to occur (which also burns extra CPU cycles you may not be able to spare)
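If you want to see the difference on your own box, here's a rough Linux-only sketch that sums resident memory (VmRSS) from /proc by process name. The process names in the two groups are assumptions; adjust them for whatever shell and desktop you actually run:

```python
# Back-of-the-envelope comparison of resident memory used by a plain
# terminal stack vs. common GUI processes, read straight from /proc.
from collections import defaultdict
from pathlib import Path

TERMINAL_STACK = {"sshd", "agetty", "login", "bash", "zsh"}   # assumptions
GUI_HINTS = {"Xorg", "gnome-shell", "plasmashell", "xfwm4"}   # assumptions

def rss_by_name() -> dict[str, int]:
    totals: defaultdict[str, int] = defaultdict(int)
    for status in Path("/proc").glob("[0-9]*/status"):
        try:
            text = status.read_text()
        except (FileNotFoundError, PermissionError):
            continue  # process exited or is not readable
        name = rss_kib = None
        for line in text.splitlines():
            if line.startswith("Name:"):
                name = line.split(None, 1)[1]
            elif line.startswith("VmRSS:"):
                rss_kib = int(line.split()[1])  # value is reported in kB
        if name and rss_kib:
            totals[name] += rss_kib
    return dict(totals)

if __name__ == "__main__":
    totals = rss_by_name()
    for label, names in (("terminal stack", TERMINAL_STACK), ("GUI processes", GUI_HINTS)):
        used = sum(totals.get(n, 0) for n in names)
        print(f"{label}: {used / 1024:.1f} MiB resident")
```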
Absolutely can and will take action. Doesn't always kill the right process (sometimes it kills big database engines for the crime of existing), but usually gives me enough headroom to SSH back in and fix it myself.
Even better, you can swapoff swap too!
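For the curious, a tiny sketch of that trick (assuming Linux and root; purely illustrative): check /proc/meminfo for swap in use, then shell out to the real swapoff:

```python
# Reads swap usage from /proc/meminfo and, if anything is swapped out,
# runs swapoff(8) to force it back into RAM. Run with care.
import subprocess

def swap_used_kib() -> int:
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            fields[key] = int(value.split()[0])  # values are in kB
    return fields["SwapTotal"] - fields["SwapFree"]

if __name__ == "__main__":
    used = swap_used_kib()
    if used > 0:
        print(f"{used / 1024:.1f} MiB in swap; forcing it back into RAM...")
        # Equivalent to running `swapoff -a` by hand; pages get pulled back
        # into RAM (or the OOM killer fires if they no longer fit).
        subprocess.run(["swapoff", "-a"], check=True)
    else:
        print("No swap in use.")
```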
There's nothing inherently wrong with having backup software, but Microsoft has a terrible track record with every other "system component" that can push data to MS Cloud: the software turns into nag-ware to make you cave and buy more Microsoft products just to make the warnings go away, sometimes for an inferior product. See OneDrive, Cortana, Edge, and Bing, just off the top of my head without doing any research.
So for me, I have several computers all protected by Synology backup. It goes to an appliance I own and control, not the cloud. This setup can be used to completely restore the entirety of a computer (with the exception of firmware) even if the main operating system is so fried that automatic startup repair doesn't work.
But in the past, despite having a 24-hour recovery point with this system (every night it backs up any data that changed since the previous backup, including core OS files), Windows backup would by default still nag me about setting it up. It wouldn't bother to even try to detect a third-party backup tool the way Defender does for third-party security software. I had to run some specific setup options to make Windows backup go away (I can't remember the details since it was some years ago, but it may have involved removing the component). By comparison, on my older Mac, when I turned off Time Machine to use Synology backup, I think I got one warning about shutting it down and then it didn't say anything else.
It's probably also way cheaper to do it that way. As far as I could tell when I checked in on it some time ago, most of the content goes through a Cloudflare proxy straight to a GCP S3-compatible bucket.