A game you "didn't know it was bad 'til people told you so"?
Excrubulent @slrpnk.net · Posts 8 · Comments 2,232 · Joined 2 yr. ago

Okay, that's all very interesting and I love the idea about dynamic music, I've had similar thoughts myself but wouldn't have thought to go this far to make it happen. I'd love to see what you come up with!
My only real thoughts are about the transpiling: the editor uses relative time codes but the format itself uses absolute, if I understand you, and you're converting between the two?
That hints at a code smell to me, because I wonder why it's necessary. For example, could you program the editor to display and work in absolute time codes, or is there something stopping that from happening?
Alternatively you could simply make the format capable of natively understanding both relative and absolute commands, so whichever is more appropriate to the context is what gets used.
Keeping them different seems like it will require you to program two formats, make them compatible with one another and deal with bugs in both of them. Essentially you've not only doubled the number of places where bugs can arise within the formats, you've added the extra step of transpiling which also doubles the number of interactions between the formats, adding even more complexity, even more places where inconsistencies can show up, even more code to sift through to find the problem.
It's the sort of thing that shows up in legacy systems where the programmers don't have the freedom to simply ditch one of the parts.
Personally if I had the freedom of programming the system from scratch I would rather commit completely to a single format and make it work across the entire stack, so then I only have one interpreter/encoder to consider. That one parser would then be the single point of reference for every interaction with the format. Any code that wants to get or place a note for any reason - for playing, editing, recording, whatever - would use the same set of functions, and then you automatically get consistency across all of it.
Edit: another thought about this: if you need some notes to be absolute and others to be relative, it might be worth having an absolute anchor command that other commands can be relative to, and have it indexed, so commands are relative to anchor 1, 2, etc. Maybe anchor 0 is just the start of the song. Also maybe you could set any command as an anchor by referring to its index. That way you can still move around those commands in a relative way while still having the overall format reducible to absolute times during playback. Also a note "duration" could just be an off command set relative to its corresponding on command.
I say that because as another principle I like to make sure that I "name things what they are". If the user is programming things in the editor that are relative, but under the hood they're translated into absolute terms, that will probably lead to unexpected behaviour.
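To make the anchor idea concrete, here's a minimal sketch. The event layout and every name in it are hypothetical, just one way it could look:

```python
# Sketch of an indexed-anchor event format: each event stores a time offset
# relative to an anchor, anchor 0 is the start of the song, and playback
# reduces everything to absolute times in one pass.

from dataclasses import dataclass

@dataclass
class Event:
    offset: float      # time relative to the referenced anchor
    anchor: int = 0    # index into the anchor table; 0 = song start

def resolve(events, anchors):
    """Reduce the relative format to absolute times for playback."""
    return [anchors[ev.anchor] + ev.offset for ev in events]

# Anchor 0 is the song start; anchor 1 sits at 8.0 seconds.
anchors = {0: 0.0, 1: 8.0}
events = [Event(0.5), Event(1.25, anchor=1)]
print(resolve(events, anchors))  # [0.5, 9.25]
```

Moving an anchor then moves every event attached to it, while playback still sees only absolute times. A note's "off" event could reference its "on" event as an anchor, which gives you durations for free.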
It's neoliberal politics. Basically, after WWII it was obvious people didn't like fascism and politicians couldn't openly embrace it. But it was too useful for protecting capitalist interests, so a bunch of neoliberal experiments were run in South America to figure out the best way to use fascism to oppress workers without creating that world-war-style blowback.
And one of the techniques they landed on was to keep scapegoating the vulnerable, but to use sanitised language. So it's not "dirty n-----s, g-----s and k----s polluting our precious blood and soil", it's "immigrants taking our jobs". It's not "useless eaters withering the soul of our nation" it's "welfare recipients mustn't be allowed to freeload."
It's the same ideas dressed up to sound a bit more respectable and not trip the fascism alarm, but they work nearly as well to strip the social safety net, which lowers wages.
The argument could be made that because the image generator is essentially a regurgitator with no artistic interpretation, there is no transformative artistic value in it. It's like applying a filter with extra steps.
Also the generators charge for access, so they are profiting off of the IP. That's quite different to making something for personal use or releasing it for free.
Honestly a lot of this post is very inside-baseball with a lot of lingo, and the last paragraph is very dense, so it's hard to know what you mean, especially by the term "transpiler". What is it transpiling to & from, and where does this happen in the overall process of implementing the editor?
I'm sorry I don't have a lot of insight other than: it sounds like you know better than anyone here, so just try it and see what works. Sometimes rewriting a system is unavoidable as you figure out the logic of it.
Also as someone with some interest in programming my own physical MIDI instruments, I'd be interested to hear what limitations of MIDI you're talking about and what your system does differently. It sounds like you've got a pretty advanced use-case if MIDI isn't up to the task.
We don't have the same problems LLMs have.
LLMs have zero fidelity. They have no - none - zero - model of the world to compare their output to.
Humans have biases and problems in our thinking, sure, but we're capable of at least making corrections and working with meaning in context. We can recognise our model of the world and how it relates to the things we are saying.
LLMs cannot do that job, at all, and they won't be able to until they have a model of the world. A model of the world would necessarily include themselves, which is self-awareness, which is AGI. That's a meaning-understander. Developing a world model is the same problem as consciousness.
What I'm saying is that you cannot develop fidelity at all without AGI, so no, LLMs don't have the same problems we do. That is an entirely different class of problem.
Some moon rockets fail, but they don't have that in common with moon cannons. One of those can in theory achieve a moon landing and the other cannot, ever, in any iteration.
If all you're saying is that neural networks could develop consciousness one day, sure, and nothing I said contradicts that. Our brains are neural networks, so it stands to reason they could do what our brains can do. But the technical hurdles are huge.
You need at least two things to get there:
- Enough computing power to support it.
- Insight into how consciousness is structured.
1 is hard because a single brain alone is about as powerful as a significant chunk of worldwide computing. The gulf between our current power and what we would need is about... 100% of what we would need. We are so woefully under-resourced for that. You also need to solve how to power the computers without cooking the planet, which is not something we're even close to solving currently.
2 means that we can't just throw more power or training at the problem. Modern NN models have an underlying theory that makes them work: they're essentially statistical curve-fitting machines. We don't currently have a good theoretical model that would tell us how to structure a NN to create a consciousness. It's not even on the horizon yet.
Those are two enormous hurdles. I think saying modern NN design can create consciousness is like Jules Verne in 1867 saying we can get to the Moon with a cannon because of "what progress artillery science has made in the last few years".
Moon rockets are essentially artillery science in many ways, yes, but Jules Verne was still a century away in terms of supporting technologies, raw power, and essential insights into how to do it.
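On the curve-fitting point, even the tiniest example shows the mechanism. This is a toy gradient-descent fit, not a real neural network, but it's the same statistical machinery at the bottom:

```python
# Fitting y = a*x + b to data drawn from y = 2x + 1 by gradient descent.
# A neural network does the same kind of thing with vastly more parameters.

data = [(x, 2 * x + 1) for x in range(10)]

a, b = 0.0, 0.0
lr = 0.01
for _ in range(5000):
    # gradient of mean squared error with respect to a and b
    grad_a = sum(2 * (a * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (a * x + b - y) for x, y in data) / len(data)
    a -= lr * grad_a
    b -= lr * grad_b

print(round(a, 2), round(b, 2))  # converges toward 2.0 and 1.0
```

Nothing in that loop "understands" lines; it just nudges parameters until the error shrinks. Scaling it up changes the curve, not the nature of the process.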
GodFUCKINGdamnit please don't remind me how easily I trust random commenters to report information.
At this point even if I click on it there's no guarantee one of you fuckers hasn't vandalised the page.
You're definitely overselling how AI works and underselling how human brains work here, but there is a kernel of truth to what you're saying.
Neural networks are a biomimicry technology. They explicitly work by mimicking how our own neurons work, and surprise surprise, they create eerily humanlike responses.
The thing is, LLMs don't have anything close to reasoning the way human brains reason. We are actually capable of understanding and creating meaning, LLMs are not.
So how are they human-like? Our brains are made up of many subsystems, each doing extremely focussed, specific tasks.
We have so many, including sound recognition, speech recognition, language recognition. Then on the flipside we have language planning, then speech planning and motor centres dedicated to creating the speech sounds we've planned to make. The first three get sound into your brain and turn it into ideas, the last three take ideas and turn them into speech.
We have made neural network versions of each of these systems, and even tied them together. An LLM is analogous to our brain's language planning centre. That's the part that decides how to put words in sequence.
That's why LLMs sound like us, they sequence words in a very similar way.
However, each of these subsystems in our brains can loop-back on themselves to check the output. I can get my language planner to say "mary sat on the hill", then loop that through my language recognition centre to see how my conscious brain likes it. My consciousness might notice that "the hill" is wrong, and request new words until it gets "a hill" which it believes is more fitting. It might even notice that "mary" is the wrong name, and look for others, it might cycle through martha, marge, maths, maple, may, yes, that one. Okay, "may sat on a hill", then send that to the speech planning centres to eventually come out of my mouth.
Your brain does this so much you generally don't notice it happening.
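A crude way to picture that loop-back in code, with the candidate phrases and the "checker" entirely made up for illustration:

```python
# A "planner" proposes phrases; a "checker" (the conscious review step)
# rejects them until one passes, and only then is it sent onward to speech.

candidates = ["mary sat on the hill",
              "mary sat on a hill",
              "may sat on a hill"]

def checker(phrase):
    # the conscious check: reject "the hill" and the name "mary"
    return "the hill" not in phrase and not phrase.startswith("mary")

def plan_with_feedback(candidates):
    for phrase in candidates:   # each pass is one loop through recognition
        if checker(phrase):
            return phrase       # send this one on to speech planning
    return None

print(plan_with_feedback(candidates))  # "may sat on a hill"
```

A bare LLM is just the `candidates` generator with no `checker` attached; the loop is the part it's missing.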
In the 80s there was a craze around so-called "automatic writing", which was essentially zoning out and writing whatever popped into your head without editing. You'd get fragments of ideas and really strange things, often very emotionally charged; they seemed like they were coming from some mysterious place. Maybe ghosts, demons, past lives, who knows? It was just our internal LLM being given free rein, but people got spooked into believing it was a real person, just like people think LLMs are people today.
In reality we have no idea how to even start constructing a consciousness. It's such a complex task and requires so much more linking and understanding than just a probabilistic connection between words. I wouldn't be surprised if we were more than a century away from AGI.
Some real "steel is heavier than feathers" energy coming off this teacher.
Not only that, the two statements in the premise are simply given. How is the child to know one of them is false? At that point, why not say Marty ate more than Luis and therefore the fractions must be different? Maybe the fractions are wrong and Luis ate more.
Just an absolutely terrible question if that's supposed to be the answer. I'd guess the teacher didn't write the question and didn't understand the answer.
I remember people talking about how the other smokers at work were all the cool people. And like, yeah, you spend several minutes several times a day hanging out outside with them, with no work and nothing to do but shoot the shit. Of course you like them better, you spend way more time with them.
Also you can all bond over your common terrible life choices, what's not to like?
Actually, factually, in truth, for real, fr, no joke, seriously, absolutely, in fact, truly, truthfully, really, in reality, literally, I couldn't tell you, because only you can figure out how to say what you want to say, and if you didn't already know that there are countless ways to say that, that makes me wonder if you actually care very much about the topic, for realsies, in actuality. All you need to do is spend a few seconds thinking of another way to say it and you can answer your own question.
It's language. Of course we have ways of saying things, that's what it's for. Also you can say "literally" to mean "actually" as long as you understand how to say it in context, and the fact that you can correct people who you believe are using it wrong is a sign that you can tell the difference and you don't need to correct them.
And if we don't have a way of saying something, you can invent one, because that's how language works. People who tell you that there's some authoritative measure by which we know what words mean don't actually care about language. They're trying to kill our language, because a living language can't be controlled like they want. The good news is that it's impossible to achieve that goal.
Like if you're not going around lamenting the fact that "terrific" doesn't mean "terrifying" anymore then maybe it's okay if words change. It sounds like you survived that particular tragedy.
"Literally" literally means "as written", or "in the literature".
To use the word "literally" to mean "in reality" or "in fact" is not that original meaning, but is literally - in fact, as well as a written thing - a figurative meaning.
Language changes. There are plenty of words that are their own antonyms. It's not sad, it's inevitable, and the sooner you can accept that the sooner you can avert the fate of becoming an old man yelling at clouds.
Little Brother is a novel about a future dystopia where copyright laws have been allowed free rein to destroy people's lives.
It's legislated that only "secure" hardware is allowed, but hardware is by definition fixed, which means that every time a vulnerability is found - which is inevitable - there is a hardware recall. So the black market is full of hardware which is proven to have jailbreaking vulnerabilities.
Just a glimpse of where all this "trusted", "secure" computing might lead.
As a short video I saw many years ago put it: "trust always depends on mutuality, and they already decided not to trust you, so why should you trust them?"
Edit: holy shit, it's 15 years old, and "anti rrusted computing video dutch voice over" (turns out the guy is German actually) was enough to find it:
Assume, he says, that the distribution of holdings in a given society is just according to some theory based on patterns or historical circumstances—e.g., the egalitarian theory, according to which only a strictly equal distribution of holdings is just.
Okay well this is immediately a false premise because nobody seriously makes this argument. This is a strawman of the notion of egalitarianism.
Also, we don't need Wilt Chamberlain to create an unequal society, we just need money. It's easy enough to show that simply keeping an account of wealth and then randomly shuffling money around creates the unequal distribution that we see in the real world:
https://charlie-xiao.github.io/assets/pdf/projects/inequality-process-simulation.pdf
And every actor there began from that supposedly impossible, strictly egalitarian starting point. No actor was privileged in any way nor had any merit whatsoever, but some wound up on top of an extremely unequal system.
So Nozick just needs to look a little deeper at his own economic system to see the problem. There is no reason why we need to have a strict numerical accounting of wealth.
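You can reproduce the core result in a few lines. This is a rough toy version with arbitrary parameters, not the linked paper's exact model:

```python
# Every agent starts perfectly equal; each step moves one unit of money
# between two randomly chosen agents. No privilege, no merit, just shuffling.

import random

random.seed(0)
agents = [100] * 200          # strictly egalitarian start

for _ in range(200_000):
    giver, taker = random.sample(range(len(agents)), 2)
    if agents[giver] > 0:     # can't go below zero
        agents[giver] -= 1
        agents[taker] += 1

agents.sort()
print(agents[0], agents[-1])  # poorest agent vs richest agent

# Gini coefficient: 0 = perfect equality, larger = more unequal
n, total = len(agents), sum(agents)
gini = sum(abs(a - b) for a in agents for b in agents) / (2 * n * total)
print(round(gini, 2))
```

Every transfer is a coin flip, yet the Gini coefficient climbs well away from zero and a clear top and bottom emerge. Inequality falls out of the accounting itself.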
Also, though, these are RC-sized 5 mm screws, so much easier to kill. Apparently the issue is that most hex drivers are slightly undersized, and ARRMA like to Loctite their axle grub screws to hell.
I've done it. It was a grub screw - so the hex was entirely within the shaft - that was surrounded by Loctite, and frankly I never had a chance to get it out. The hex rounded out immediately, just with hand pressure. I ended up having to use a screw extractor.
I was told this was a common problem on ARRMA vehicles and that I should get a more precise type of hex driver. They were expensive but I haven't had the problem since.
Your funeral, don't say I didn't warn you.
Your gnomes shouldn't be dead, they're technically immortal and a stint in the dishwasher is their ticket out of the salt mines. If you've got dead gnomes the last thing you want is to keep their bodies on the premises. If you leave them in the cartridge they can be revived when you exchange it for the new cartridge. If you put them in the ground they will find... other ways back to their realm, and they will remember what you did.
And please remember to buy gnomane dishwashing tablets, I cannot stress enough how much they should not be dead.
Also don't ask me why the gnome salt mine slavery exists, I didn't create it, I just benefit from it.
I played the first game and thought it was okay but not great. What were the changes? Maybe they'll suit me since I'm not so attached to the original.