I think the next bit of performance may come from leaning hard into QAT. We know there is a lot of wasted precision in models, so the better we account for that during training, the better small quants can get.

I also think diffusion LLMs' ability to revise previous tokens is amazing. As is the ability to iteratively run an autoregressive LLM to improve output quality.

I think a mix of QAT and iterative inference will bring the biggest upgrades to local use. It'll give you a smaller, higher quality model that you can decide to run for even longer for higher quality outputs.
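For what it's worth, the core QAT trick can be sketched in a few lines. This is a simplified illustration (symmetric per-tensor scaling, plain rounding, no straight-through gradient handling), not any particular framework's implementation: during training you round weights to a low-bit grid and dequantize them back, so the model learns under the precision it will actually have once quantized.

```python
import numpy as np

def fake_quant(w, bits=4):
    """Simulate low-precision storage in the forward pass: snap weights
    to a small signed-integer grid, then dequantize back to float.
    Training against this rounded view is what lets small quants keep
    quality -- the model adapts to the precision loss up front."""
    qmax = 2 ** (bits - 1) - 1                  # e.g. 7 for 4-bit signed
    scale = np.abs(w).max() / qmax              # simple symmetric per-tensor scale
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                            # dequantized, float again

w = np.random.randn(4, 4).astype(np.float32)
w_q = fake_quant(w, bits=4)                     # what a 4-bit quant would see
```

With 4 bits the whole tensor collapses onto at most 16 distinct values, and the worst-case error per weight is half a quantization step.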
You can't disregard anyone's humanity. Even billionaires'. There are no universally bad people; negativity is always relational.
Though I do think you can weigh a billionaire's comfort against the folks they made billions from, and that may just be potent enough for the death penalty.
However, I don't think punishment is a humane solution. Rehabilitation and integration are always preferred. Though again, some folks integrate best as corpses.
I think I've been able to use Secure Boot with Fedora. I didn't need to change any of my settings. But I built my PC myself, so there was never any Windows-specific config. I did dual boot for a bit at the start, but now I just have Linux on it.
Overall I doubt you need to change any BIOS settings. I'd just try the install, and if you run into issues, figure it out from there.
I misremembered my internet class. Sucks that it made ya feel bad.
Edit: and you can put whatever you want as your source IP at the IP level. Though idk how modern security deals with that. I know I was taught it was a way to run DoS attacks, so I imagine it's protected against.
You can fake your IP. There isn't really any authentication at the IP level. Just make a packet and overwrite the source IP field.
Edit: I was corrected. The TCP handshake requires you to have a valid IP you can respond from. So even though you can fake your IP, you can't use that to talk to most websites.
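To make the "no authentication" point concrete, here's a rough sketch of what an IPv4 header actually is: just 20 bytes you pack yourself, where the source address is an ordinary field you fill in with whatever you like. This only builds the bytes (sending raw packets needs raw sockets and root); the field names follow RFC 791, and the checksum is the standard ones-complement sum, which guards against corruption, not lying.

```python
import socket
import struct

def ipv4_header(src_ip, dst_ip, payload_len=0, proto=socket.IPPROTO_TCP):
    """Build a raw 20-byte IPv4 header. Note the source address is just
    a field we write ourselves -- nothing in the protocol verifies it."""
    version_ihl = (4 << 4) | 5                  # IPv4, header length 5 * 4 = 20
    total_len = 20 + payload_len
    header = struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_len,              # version/IHL, DSCP/ECN, total length
        0, 0,                                   # identification, flags/fragment offset
        64, proto, 0,                           # TTL, protocol, checksum placeholder
        socket.inet_aton(src_ip),               # source IP: any value we like
        socket.inet_aton(dst_ip),
    )
    # standard internet checksum: ones-complement sum of 16-bit words
    s = sum(struct.unpack("!10H", header))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    checksum = ~s & 0xFFFF
    return header[:10] + struct.pack("!H", checksum) + header[12:]

hdr = ipv4_header("1.2.3.4", "8.8.8.8")         # spoofed source address, 20 bytes
```

This is also why the TCP handshake correction above matters: the server's SYN-ACK goes to whatever source address you wrote, so if it's faked, you never see the reply and can't complete the connection.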
I think you're projecting the deliberate-choice part. I think a lot of folks can reasonably get caught up in their own lives and not look into things too deeply. It's effort to overhaul your information intake. Lots of folks have very little effort left over after work, and it's reasonable to assume nothing has convinced them that their news is bad.
I think it's easier than ever to get the info, but that still doesn't mean it's easy enough that everyone and their mom automatically knows what they should be paying attention to.
Making these things about personal failings feels very unproductive. There is a lot to focus on in life. It seems better to try and make the subject approachable and comfortable.
I designate all folks as good folks. Even with the whole 'every action is inherently selfish' worldview that I have. I think almost anyone close to me, and anyone nearby with free time, would rush me to the hospital.
Though, I think leaving me to die is fair and wouldn't make someone a bad person. I am only the center of my universe.
I'd imagine the point of designating good and bad people is to decide where to put your effort. Who to try to support. Maybe who to keep in your life. I'd say that can be done just fine without labeling folks as "bad people".
I worry folks will dehumanize "bad people" and become a bit too negligent of their experiences. "Bad people" just means "contradictory and offensive culture" in most cases.