Hey, I didn't realize I was so good at self-treatment!
mindbleach @lemmy.world · Posts 0 · Comments 242 · Joined 2 yr. ago
Gene Amdahl himself was arguing hardware. It was never about writing better software - that's the lesson we've clawed out of it, after generations of reinforcing harmful biases against parallelism.
Telling people a billion cores won't solve their problem is bad, actually.
Human beings by default think going faster means making each step faster. How you explain that this intuition is wrong matters so much more than the bare fact that it's wrong. That approach inevitably leads to saying 'see, parallelism hits a wall.' If all they hear is that another ten slow cores won't help but one faster core would - they're lost.
That's how we got needless decades of doggedly linear hardware and software. Operating systems that struggled to count to two whole cores. Games that monopolized one core, did audio on another, and left your other six untouched. We still lionize cycle-juggling maniacs like John Carmack and every Atari programmer. The trap people fall into is seeing a modern GPU and wondering how they can sort their flat-shaded triangles sooner.
What you need to teach them, what they need to learn, is that the purpose of having a billion cores isn't to do one thing faster, it's to do everything at once. Talking about the linear speed of the whole program is the whole problem.
8-bit machines didn't stop dead at 256 bytes of memory. Address length and bus width are completely independent. 1970s machines were often built with bit-slice memory, with however many bits of addressing, and one-bit output. If you wanted 8-bit memory then you'd wire eight chips in parallel - with the same address lines. Each chip would deliver a different part of the same logical byte.
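If it helps to see it in code, here's a rough sketch of that wiring - the chip count, address width, and values are made up for illustration, but the idea is eight one-bit chips all fed the same address lines:

```c
#include <stdio.h>
#include <stdint.h>

/* Toy model of bit-sliced memory: eight 1-bit-wide chips share the same
   address lines, and each one delivers one bit of the logical byte.
   Sizes here are illustrative, not any particular machine. */

#define ADDR_SPACE 65536   /* 16 address lines - far more than 256 bytes */

static uint8_t chip[8][ADDR_SPACE];   /* one array per 1-bit chip, holding 0 or 1 */

uint8_t read_byte(uint16_t addr) {
    uint8_t value = 0;
    for (int bit = 0; bit < 8; bit++)          /* same address goes to every chip */
        value |= (chip[bit][addr] & 1) << bit; /* each chip supplies one bit */
    return value;
}

void write_byte(uint16_t addr, uint8_t value) {
    for (int bit = 0; bit < 8; bit++)
        chip[bit][addr] = (value >> bit) & 1;
}

int main(void) {
    write_byte(0x1234, 0xAB);
    printf("0x%02X\n", read_byte(0x1234));  /* 0xAB */
    return 0;
}
```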
64-bit math doesn't need 64-bit hardware, either. Turing completeness says any computer can run the same code - memory and time allowing. As an object lesson, JavaScript has used 64-bit double floats exclusively since it was defined in the late 1990s, when it ran exclusively on 32-bit machines.
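A minimal sketch of how that works for integers, using nothing wider than 32-bit operations - split each 64-bit value into two halves and carry between them. (JavaScript's doubles are floating point, but the principle of building wider math out of narrower hardware is the same.)

```c
#include <stdio.h>
#include <stdint.h>

/* 64-bit addition using only 32-bit operations, the way a 32-bit
   machine (or its compiler) handles wider integers. */
typedef struct { uint32_t lo, hi; } u64_t;

u64_t add64(u64_t a, u64_t b) {
    u64_t r;
    r.lo = a.lo + b.lo;                 /* low halves, may wrap around  */
    uint32_t carry = (r.lo < a.lo);     /* wrap-around means carry out  */
    r.hi = a.hi + b.hi + carry;         /* high halves plus the carry   */
    return r;
}

int main(void) {
    u64_t a = { 0xFFFFFFFFu, 0x00000001u };   /* 0x00000001FFFFFFFF */
    u64_t b = { 0x00000001u, 0x00000000u };   /* 0x0000000000000001 */
    u64_t s = add64(a, b);
    printf("0x%08X%08X\n", s.hi, s.lo);       /* 0x0000000200000000 */
    return 0;
}
```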
Mostly heat. Every gate destroys information, which is kinda the definition of entropy, so it necessarily generates heat. There's goofy plans for "reversible computing" that swap bits - so true is 10 and false is 01 - and those should only produce heat through the resistance in the wires. (I personally suspect you'd have to shuttle data elsewhere and destroy it anyway. That'd be off-chip, so it could be arbitrarily large, instead of concentrating hundreds of watts in a thumbnail of silicon. But you'd still have a motherboard with a north bridge, a south bridge, and a woodshed.)
The other change that'd make wider lanes less egregious is 3D chip design. We're pretty far from 2D, already. There's dozens of layers of stuff going on in any complex microchip. AMD's even stacking a couple naked dies on top of one another for higher memory bandwidth. But what'd be transformative is the ability to fold any square layout into a cube, with as much fine detail vertically as it has horizontally. 256-bit data paths could be 16 traces wide and tall. Some could have no presence at all, because the destination is simply atop the source, and connected by a bunch of 10nm diagonals.
But aside from the design and manufacturing complexity of that added dimension, current technology would briefly turn that cube into an incandescent lightbulb. The magic smoke would escape with unprecedented efficiency.
The PS3 had a 128-bit CPU. Sort of. "Altivec" vector processing could split each 128-bit word into several values and operate on them simultaneously. So for example if you wanted to do 3D transformations using 32-bit numbers, you could do four of them at once, as easily as one. It doesn't make doing one any faster.
Vector processing is present in nearly every modern CPU, though. Intel's had it since the late 90s with MMX and SSE. Those just had to load registers 32 bits at a time before performing each same-instruction-multiple-data operation.
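Here's roughly what that looks like with Intel's SSE intrinsics - values are arbitrary, and this is just a sketch of the idea, not how any real engine does its transforms:

```c
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

/* Four 32-bit float additions in one SIMD operation. Same idea as the
   PS3's vector unit: one 128-bit register, four lanes, one instruction. */
int main(void) {
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);
    __m128 sum = _mm_add_ps(a, b);        /* all four lanes added at once */

    float out[4];
    _mm_storeu_ps(out, sum);
    printf("%.0f %.0f %.0f %.0f\n", out[0], out[1], out[2], out[3]);
    /* 11 22 33 44 - four results, but no single addition got any faster */
    return 0;
}
```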
The benefit of increasing bit depth is that you can move that data in parallel.
The downside of increasing bit depth is that you have to move that data in parallel.
To move a 32-bit number between places in a single clock cycle, you need 32 wires between two places. And you need them between any two places that will directly move a number. Routing all those wires takes up precious space inside a microchip. Indirect movement can simplify that diagram, but then each step requires a separate clock cycle. Which is fine - this is a tradeoff every CPU has made for thirty-plus years, as "pipelining." Instead of doing a whole operation all-at-once, or holding back the program while each instruction is being cranked out over several cycles, instructions get broken down into stages according to which internal components they need. The processor becomes a chain of steps: decode instruction, fetch data, do math, write result. CPUs can often "retire" one instruction per cycle, even if instructions take many cycles from beginning to end.
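Back-of-the-envelope version, with a made-up four-stage pipeline and a thousand instructions:

```c
#include <stdio.h>

/* Toy model of a 4-stage pipeline (fetch, decode, execute, write-back).
   Each instruction takes 4 cycles start to finish, but a new one can
   enter every cycle, so the steady state retires one per cycle. */

#define STAGES 4

int main(void) {
    int instructions = 1000;

    long unpipelined = (long)instructions * STAGES;        /* 4000 cycles */
    long pipelined   = (long)instructions + STAGES - 1;    /* 1003 cycles */

    printf("unpipelined: %ld cycles\n", unpipelined);
    printf("pipelined:   %ld cycles\n", pipelined);
    printf("each instruction still takes %d cycles end to end\n", STAGES);
    return 0;
}
```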
To move a 128-bit number between places in a single clock cycle, you need an obscene amount of space. Each lane is four times as wide and still has to go between all the same places. This is why 1990s consoles and graphics cards might advertise 256-bit interconnects between specific components, even for mundane 32-bit machines. They were speeding up one particular spot where a whole bunch of data went a very short distance between a few specific places.
Modern video cards no doubt have similar shortcuts, but that's no longer the primary way they perform ridiculous quantities of work. Mostly they wait.
CPUs are linear. CPU design has sunk eleventeen hojillion dollars into getting instructions into and out of the processor, as soon as possible. They'll pre-emptively read from slow memory into layers of progressively faster memory deeper inside the microchip. Having to fetch some random address means delaying things for agonizing microseconds with nothing to do. That focus on straight-line speed was synonymous with performance, long after clock rates hit the gigahertz barrier. There's this Computer Science 101 concept called Amdahl's Law that was taught wrong as a result of this - people insisted 'more processors won't work faster,' when what it said was, 'more processors do more work.'
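For the record, the formula: with a parallelizable fraction p and n cores, Amdahl's Law puts the best-case speedup for one task at 1 / ((1 - p) + p / n), which caps out at 1 / (1 - p) no matter how many cores you throw at it. Quick sketch with made-up numbers - the cap on one task is real, but n cores still chew through roughly n independent tasks at once:

```c
#include <stdio.h>

/* Amdahl's Law: best-case speedup of a single task with a
   parallelizable fraction p running on n cores. */
double amdahl(double p, double n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    double p = 0.9;   /* assume 90% of one task is parallelizable */
    printf("10 cores:   %.2fx\n", amdahl(p, 10));    /* 5.26x  */
    printf("1000 cores: %.2fx\n", amdahl(p, 1000));  /* 9.91x  */
    printf("a billion:  %.2fx\n", amdahl(p, 1e9));   /* 10.00x */
    return 0;
}
```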
Video cards wait better. They have wide lanes where they can afford to, especially in one fat pipe to the processor, but to my knowledge they're fairly conservative on the inside. They don't have hideously-complex processors with layers of exotic cache memory. If they need something that'll take an entire millionth of a second to go fetch, they'll start that, and then do something else. When another task stalls, they'll get back to the other one, and hey look the fetch completed. 3D rendering is fast because it barely matters what order things happen in. Each pixel tends to be independent, at least within groups of a couple hundred to a couple million, for any part of a scene. So instead of one ultra-wide high-speed data-shredder, ready to handle one continuous thread of whatever the hell a program needs next, there's a bunch of mundane grinders being fed by hoppers full of largely-similar tasks. It'll all get done eventually. Adding more hardware won't do any single thing faster, but it'll distribute the workload.
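Toy model of that waiting, with invented numbers - the point is that the fetches overlap instead of stacking up:

```c
#include <stdio.h>

/* Latency hiding, crudely modeled: each "pixel" needs a memory fetch
   that takes FETCH_LATENCY ticks, then one tick of math. A linear core
   waits out every fetch in turn; a GPU-style scheduler issues a fetch,
   moves on to the next pixel, and comes back once the data arrives.
   Constants are illustrative, not any real GPU. */

#define TASKS 8
#define FETCH_LATENCY 100

int main(void) {
    /* Linear: each task pays the full fetch latency, one after another. */
    long linear_ticks = (long)TASKS * (FETCH_LATENCY + 1);

    /* Overlapped: issue one fetch per tick, then do the math as each
       fetch completes. */
    long overlapped_ticks = 0;
    for (int i = 0; i < TASKS; i++) {
        long done = i + FETCH_LATENCY + 1;   /* issued at tick i, +1 tick of math */
        if (done > overlapped_ticks) overlapped_ticks = done;
    }

    printf("linear:     %ld ticks\n", linear_ticks);      /* 808 */
    printf("overlapped: %ld ticks\n", overlapped_ticks);  /* 108 */
    return 0;
}
```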
Video cards have recently been pushing the ability to go back to 16-bit operations. It lets them do more things per second. Parallelism has finally won, and increased bit depth is mostly an obstacle to that.
So what 128-bit computing would look like is probably one core on a many-core chip. Like how Intel does mobile designs, with one fat full-featured dual-thread linear shredder, and a whole bunch of dinky little power-efficient task-grinders. Or... like a Sony console with a boring PowerPC chip glued to some wild multi-phase vector processor. A CPU that they advertised as a private supercomputer. A machine I wrote code for during a college course on machine vision. And it also plays Uncharted.
The PS3 was originally intended to ship without a GPU. That's part of its infamous launch price. They wanted a software-rendering beast, built on the Altivec unit's impressive-sounding parallelism. This would have been a great idea back when TVs were all 480p and games came out on one platform. As HDTVs and middleware engines took off... it probably would have killed the PlayStation brand. But in context, it was a goofy path toward exactly what we're doing now - with video cards you can program to work however you like. They're just parallel devices pretending to act linear, rather than the other way around.
Hewlett-Packard is just an unhinged ad campaign for Brother.
He went to prison and his co-conspirator let him out.
We're not okay.
The answer in this case is neither.
But you'll notice a deafening absence of criticism from the right when The Idiot's children did all this and much worse, and vanishingly little liberal defense of Hunter.
There is no great Democratic hypocrisy on this subject. He was going to plead guilty and be sentenced and nobody really objected. What he did was undesirable and gross and apparently over-the-line, but it's not much of a scandal even on pre-Idiot levels. (Five or six years of constantly going "holy shit what the fuck is going on?" has tilted that scale to where this barely registers.)
On the other hand - Republicans suddenly feigning deep concern about nepotism and propriety is so two-faced that it's not even funny. Idiot Junior, what's-her-face, and Eric were only outdone in classical naked corruption by The Idiot himself, because he's too stupid to know why people act coy about it. He pardoned Rod Blagojevich. His children aren't screaming narcissists, so their efforts were slightly more subtle, while still including overt criminal behavior.
Of course not. They're saying it's okay when they do it.
This is not a snarky reply. This is a fact we need to stop treating as obscure or questionable. Stop pretending their bullshit exists in any worldview where fairness applies. Don't even do it rhetorically. It is an obstacle to internalizing how conservatives think.
It's simple and it's nonsense and it's terrifying.
Because of all the things they did that are crimes, that's not one of them.
I wish it was that low.
Any solid-state media you can access is almost certainly NAND. There's a second kind of flash memory called NOR, but it's gradually disappearing. I think it's relegated to EEPROMs and similar embedded uses. The number of applications where its advantages matter is outweighed by the seventeen bajillion dollar market for higher-capacity NAND. All the research money and foundry tech are going toward the one that'll let them sell 1 TB SSDs for $20.
Dynamic RAM is a bucket with a hole in it. Genuinely, that is the model that makes it so cheap.
Static RAM is the proper way to do memory: half a dozen transistors form each bistable flip-flop, so there's two input wires and one output wire, and the output wire is either high or low depending on which input wire was used most recently. Static RAM will maintain its state using comically low power. Static RAM runs on the idea of electricity. It's how cartridge games from the 90s had save files. There's a button-cell battery that was enough to power some kilobits of memory for an entire decade. But because static RAM uses so many gates, it takes up a lot of silicon, and it is hideously expensive, to this day.
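In code, the behavior described there is basically a set/reset latch - this little model is an illustration of that description, not a literal six-transistor SRAM cell:

```c
#include <stdio.h>
#include <stdbool.h>

/* Toy model of the static RAM cell as described above: two inputs,
   one output, and the output stays at whatever the last-pulsed input
   left it. No refresh needed - the state just holds. */
typedef struct { bool q; } latch_t;

void pulse_set(latch_t *l)   { l->q = true;  }
void pulse_reset(latch_t *l) { l->q = false; }
bool read_output(const latch_t *l) { return l->q; }

int main(void) {
    latch_t bit = { false };
    pulse_set(&bit);
    printf("after set:   %d\n", read_output(&bit));   /* 1 */
    pulse_reset(&bit);
    printf("after reset: %d\n", read_output(&bit));   /* 0 */
    return 0;
}
```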
Dynamic RAM is a stupid engineering workaround that happens to be really, really effective. Each bit is a capacitor. That's all. It will slowly drain, which is why your laptop has to hibernate to disk instead of lasting forever like Pokemon Red. When a capacitor has charge, applying more power is met with resistance, which lets the sole input wire detect the state of that bit. And so long as you check every couple milliseconds, and refill capacitors that are partially charged... the state of memory is maintained. On very old machines this refresh might have been handled by the rest of the system. IIRC, on SNES, there's a detectable stall in the middle of each scanline, where some ASIC reads and then writes a portion of system memory. On modern devices that's all handled inside the memory die itself. The stall is still there, but if it affects your program, you are doing something silly.
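The leaky-bucket model, as a sketch with made-up constants - every cell drains a little each tick, and a periodic refresh pass tops up anything still above the read threshold:

```c
#include <stdio.h>

/* Toy model of DRAM as leaky buckets: each bit is a capacitor whose
   charge drains every tick. A refresh pass reads every cell and
   recharges anything still above the threshold. All numbers invented. */

#define CELLS 8
#define FULL 100
#define LEAK 3
#define THRESHOLD 50
#define REFRESH_INTERVAL 16   /* ticks between refresh passes */

int main(void) {
    int charge[CELLS] = { FULL, 0, FULL, FULL, 0, FULL, 0, FULL };  /* 1 0 1 1 0 1 0 1 */

    for (int tick = 1; tick <= 200; tick++) {
        for (int i = 0; i < CELLS; i++)            /* every capacitor leaks */
            if (charge[i] > 0) charge[i] -= LEAK;

        if (tick % REFRESH_INTERVAL == 0)          /* the refresh pass */
            for (int i = 0; i < CELLS; i++)
                charge[i] = (charge[i] >= THRESHOLD) ? FULL : 0;
    }

    for (int i = 0; i < CELLS; i++)                /* the data survives */
        printf("%d", charge[i] >= THRESHOLD);
    printf("\n");                                  /* 10110101 */
    return 0;
}
```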
The RAM in your machine has nigh-unlimited write cycles because it will naturally return to zero. It is impermanent on the scale of microseconds. By design, your data has no lasting impact. That is central to its mechanism.
Do y'all know your phone has a browser?
And there are no mysteries here. We know he did it. We saw him do it, in some cases. We have known for years about the fake-elector scheme. We've all seen photos of the documents at Mar-a-Lago. We've heard him, on tape, begging for state officials to betray our democracy.
If the law is allowed to matter, these people are simply fucked.
If.
That's the dream.
In practice, very no.
For example... the actions that dolt was specifically pooh-poohing?
They treat that single dead loon as if she was shot in her own apartment, and not crawling over the final barricade between an angry mob and the entire elected federal government.
Well. The entire elected federal government, except for one particular bastard.
Ignoring the obvious - provocation and false flags are polar opposites, you dunderfuck.
Agent provocateurs go into sincere crowds and start shit to escalate what they're already doing. (Or to justify violence against the crowd, because "someone" threw a rock.) It's an agent, singular. An opposing individual in disguise shapes perception.
False-flag attacks are done by an opposing group. And they do the thing, themselves. They're not goading a crowd into escalating, or inviting violence from undisguised opposition. Sincere individuals are completely optional. The action is performed entirely by people acting under someone else's banner. You know. Like the name.
If it was a false flag, you'd want these assholes in jail as badly as we do.
Reality is a team sport, to some people.
What's true today is handed down by people above you, and if they belong above you, they must be better than you. In every way. There's only the one metric. People in this mindset don't understand expertise as anything more than sounding real smart.
Someone clever can't just be wrong, since that would require an objective means to evaluate claims. That is not what claims are for, in this tribal worldview. They can only be accepted or rejected based on interpersonal trust.
And they think that's all you're doing, because they think that's all there is.
This is why every argument you have with these people is frustrating nonsense. They don't have a stable set of beliefs you can interact with. They have slogans. They will freely shuffle through them to win the conversation by finding plausible excuses. Consistency means less than nothing. It's an obstacle to making good moves in this stupid word game.
This is what defines conservatism. This is all there is to conservatism. It's the Oops All Hierarchy worldview. There is no other force in that moral universe. They can claim high-minded ideals, but they'll flit between contradictory bullshit with zero self-recognition. So either they're somehow champions of individual rights while viciously enforcing tradition and also the Confederate-loving party of Lincoln and also free-speech-loving censorship nannies and also stalwart capitalist bailout-sucking union-busters or else they're just fucking lying. They're just... shuffling cards.
The only reliable predictor of what they'll say is whether they're talking about the ingroup or the outgroup. They will make whatever mouth noises protect and elevate the ingroup. They will make whatever mouth noises attack and denigrate the outgroup.
I understand why they don't see this division. What the hell is our excuse?
Centurii-chan!