Posts 1 · Comments 787 · Joined 2 yr. ago

  • I think you may be misinformed as to what a non-compete agreement is. For example, when I worked for LeafFilter, I had to sign a non-compete stating that I couldn't/wouldn't work in the gutter protection industry for 6-12 months after leaving the company. Was it too broad to enforce, and just there to keep anybody with a working brain from taking their service and providing it for cheaper? Yes. Did it work, driving down competition and effectively allowing them to corner the US market? Also yes.

  • The only thing that kept the apartheid government in place was US and British support. There's a reason the fall of the Berlin Wall was a big deal for SA. Similarly, Israel's current existence as a colonial state and the dominant power in the Middle East is only possible because of daddy America. If not for our support (and emigration; many settlers are from the US), they would likely cave to international and internal pressures, as happened in Argentina, Chile, South Africa, and elsewhere.

    I hope that Joe can finally get the balls to do it. People are pissed off, and he's going to lose the election if he doesn't do something. Which is a shame, because his administration isn't great but has been a lot better than many previous ones, especially on monopolies (insofar as taking them to court). Also, though, I don't want to live in a fascist shithole under Trump.

  • The ANC in South Africa was largely ineffective. Mandela specifically is a great example of how people can get much more done as moderates than as violent radicals. He never saw any sort of true progress until after his imprisonment and his subsequent laying down of arms.

  • Wow, amazing, industrialization lifts people out of poverty... never would have realized. The problem with China is that currently, akin to the late stages of the USSR, they're only redistributing wealth insofar as that means reprivatizing it and consolidating it in new hands: not actually redistributing from the wealthy minority to the majority, but creating a new wealthy minority.

  • It really depends on who you're talking about. Indie game devs? Don't pirate. EA? The programmers/designers/writers/etc. are getting paid the same amount either way, but there is still an argument to be made that if enough people were to pirate, it could cause layoffs in the future. Higher amounts of piracy and higher sales are positively correlated tho, so it's really tough to say.

    In no way should someone be upset with you for not wanting to pirate tho. That's just ridonk.

  • If you could read with a 5th grader's level of comprehension, you would know by now. Feel free to go back. It's all there. Just put it together.

    I never said anything about a religious belief. People are free to hold them, even if they're irrational.

    I'm a literal scientist, so I don't know how that could be possible. It seems like you're the one extrapolating erroneously from data.

    I will not be replying again.

  • Not an example of the unknowable. An example of knowing that something 'is not' without defining what 'is'.

    I have a dismissive attitude towards things like the Turing test because they're only empirical insofar as they empirically record a subjective opinion.

    Similarly, with the Othello study, my problem is not with their data, but what they attempt to extrapolate from it.

    In the same way that I can't define God, I can say with some certainty that you aren't it. Could I be wrong? Potentially, in an incredibly, incredibly unlikely scenario. Am I willing to take that risk? Yes... and Occam's razor supports as much.

  • They were fascists too, dumbass. That's the point. In the modern geopolitical sphere, the far right has more centrist support than the far left. Just like with Hitler, or in my example, Videla, etc. They promise security and propagandize, often threatening journalists and humanitarian aid workers, as we see Israel doing.

    The moment they come to power again within the confines of the system, with these laws already in effect, they gain the power to crack down on whatever form of 'extremism' they want, now with legal precedent. The problem with fascists is that they are inherently bad actors in a democracy, and people must be taught as much so as to avoid such a rise occurring again.

    Just like with the US War on Drugs, the best thing to do is educate people, not try to 'protect' them the way DARE did.

    The problem is that it's not the fascist leader alone, but the proliferation of fascist ideals and sympathies in the population, that leads to fascist uprisings. The goal should be to educate people about what those are and why they're bad, fallacious, and provably wrong, not to play Big Brother like China.

  • What happens when people with the opposite political views (Bolsonaro, for example) come to power and label you the extremist who is no longer allowed access to a platform to spread your message?

    Just trying to point out that this is the same language and reasoning that, for example, the neofascist regime in Argentina from 1976-1983 used to pacify the public over the disappearance and torture of 30,000 people, mostly leftists.

    We can be better than them.

    The legal system has nothing to do with understanding and everything to do with arbitrarily assigned human bullshit (just like the Turing test). While law tends to be rational, it's notoriously shit as a way of understanding the universe. (Live in a fascist country? Well, the law's the law.) I really regret trying to use that quote as an example, because you've latched onto it like a bulldog and simply can't let go.

    Science is the only way by which we can advance our understanding of the universe. There are cases of unknowable questions where people use philosophy or religion to try to fill the gap, but they still never actually know; they just think.

    That wasn't the exact study I was referencing, but it is actually better at explaining some of the related concepts, both in analogy and in its discussion (a discussion in which the authors admit that what they think their findings indicate and what their findings actually indicate could be two different things).

    But to conclude that the multidimensional set of vectors is somehow mapping out the board, just because the output data changes when you change part of the input data (even counterfactual input the model has never seen, because the move is illegal), is another huge leap. Of course the output changes: the patterns change, and the GPT has internalized the patterns in its training data, just as it internalizes the syntax and rules of language.
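
    To make that concrete, here's a toy demonstration (a made-up "model", nothing to do with their actual code): any deterministic function of the input will change its output when you change the input, counterfactual or not, so the change alone proves nothing.

    ```python
    # Toy demonstration: change the input and the output distribution
    # changes, for ANY function of the input. The "model" here is just a
    # hash, which obviously represents nothing about Othello.
    import hashlib

    def toy_next_move_distribution(moves):
        """Fake 'probability distribution' over 4 candidate moves."""
        digest = hashlib.sha256(" ".join(moves).encode()).digest()
        weights = [b + 1 for b in digest[:4]]   # bytes -> positive weights
        total = sum(weights)
        return [round(w / total, 3) for w in weights]

    seen = ("d3", "c5", "f6")
    counterfactual = ("d3", "c5", "b2")  # one move swapped; may be illegal

    print(toy_next_move_distribution(seen))
    print(toy_next_move_distribution(counterfactual))  # differs, of course
    ```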

    I don't think it really has any meaningful impact if they're incorrect, but if they're correct, it could mean that the AI is somehow creating a representation of the data within itself, which honestly wouldn't surprise me either.

    I guess I was arguing more against the guy quoting the study at me in the first place than against the study itself, though I do have my issues with their analogy, because it's simply clownish to compare a crow to a mathematical construct purposely created to internalize the rules and syntax of language.

    Also, that journal has a high schooler on its editorial board and no name for itself... not exactly The Journal of Machine Learning Research lol

  • Isn't this basically just what my comment about the edge of the knowable was saying, the one you snarkily replied to with the Turing test?

    Like go watch one of the videos I linked if you haven't. I think they'd be really interesting to you, especially the first one.

    I agree with you tho. 'What are we looking for?' is the question to ask. By that same notion, I can say with certainty, for myself, that what we have doesn't reason, but I can't elaborate on what it might take to make something that does. Just as with obscenity in that famous Supreme Court case.

    To elaborate on the Othello point:

    They tested the LLM with a probe and changed a board piece, then checked the resultant probability distribution to determine whether the AI would change its own distribution, in order to 'prove' that it was creating world representations of the board. The problem, and this is what makes the study authors' thinking kind of fallacious, is that if you change the input data, of course the output data is going to change. That's just a result of training the AI on different legal board states, since the moves made directly determine the placement of the pieces.
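
    For anyone wondering what a 'probe' even is here: roughly, a small classifier trained to read some property (like a square's occupancy) out of the model's hidden activations. A minimal sketch with synthetic stand-in data; every name and shape below is my own assumption, not the study's code.

    ```python
    # Minimal sketch of a "probe": a linear classifier trained to read a
    # board property out of hidden activations. Synthetic stand-in data;
    # names and shapes are illustrative assumptions, not the study's code.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_positions, d_model = 2000, 512   # hypothetical sizes

    # Stand-in for hidden states captured while the model processes moves
    # (a real probe would read these out of the network, layer by layer).
    hidden_states = rng.normal(size=(n_positions, d_model))

    # Stand-in label: is one particular square occupied? (0/1)
    square_occupied = rng.integers(0, 2, size=n_positions)

    probe = LogisticRegression(max_iter=1000)
    probe.fit(hidden_states[:1500], square_occupied[:1500])

    # On held-out random data this sits near 50%; the study's claim rests
    # on real activations being decodable well above chance.
    print("held-out probe accuracy:",
          probe.score(hidden_states[1500:], square_occupied[1500:]))
    ```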

    Furthermore, they showed that it outperformed random chance at predicting legal moves, but that's just how training AI works. An LLM is better than random chance at predicting the next word as a result of its training. (Toy numbers below on why that bar is so low.)
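
    All the numbers here are invented for illustration, but they show the shape of the problem: a uniform-random player rarely hits a legal move, so nearly any model trained on legal games clears that bar.

    ```python
    # Toy illustration of "beats random chance" as a weak baseline.
    # Every number here is invented for illustration.
    import random

    random.seed(0)
    VOCAB_SIZE = 60          # hypothetical move vocabulary (60 squares)
    LEGAL_PER_POSITION = 8   # suppose ~8 moves are legal in a midgame spot
    TRIALS = 10_000

    hits = sum(
        random.randrange(VOCAB_SIZE) < LEGAL_PER_POSITION  # first 8 "legal"
        for _ in range(TRIALS)
    )
    print(f"uniform-random legal-move rate: {hits / TRIALS:.1%}")  # ~13%

    # A model that merely reproduces frequent patterns from legal training
    # games lands far above this without "understanding" anything.
    ```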

    If you don't really get what I'm talking about here, I recommend this video: https://m.youtube.com/watch?v=wjZofJX0v4M&vl=en

  • He was also a good guy in his community. A church called his number once by accident when asking for donations for repairs, and he was like "hold on a minute" and went down to the church with a check for the full amount they had been trying to raise, telling them something like 'I better not see a word about this in the paper.'

    Absolutely a stand-up guy.

  • I really just don't get why somebody would get emotional over an argument like this, but to each their own, I suppose. The reason for the emotionality of my reply is rather simply stated: I still don't believe you had any intent to spare anybody 'emotional distress'. You were trying to remain aloof and, honestly, rather cunty, by bringing up something literally everybody even mildly interested in AI knows all about, as if it were the end-all be-all of understanding whether thinking can arise from a machine. On top of that, you purposefully haven't engaged with any of the points directly refuting the things you've said. Honestly, some of the emotionality comes from remembering being like you: thinking I knew everything, and whenever somebody would hold me to my words, doing something along the lines of what you're doing (engaging in argument dishonestly to maintain the appearance of 'winning' when I really should have been learning and changing my mind, instead of bringing up the same tired pop-culture "smart people" bs).

    Anyway,

    My point wasn't about obscenity. It's about the nebulousness of something like reason. And the Turing test isn't scientific in the first place, so I'm really not sure where you got all this 'science vs law' bs from.

    The point wasn't that reason is like obscenity, but that I can clearly see, from the way we train LLMs, that they aren't reasoning in any form; rather, they're using values derived over time from the training data fed in and the 'reward' system used to steer them toward right answers. An LLM is no more than a complicated calculator, controlled in many ways by the humans who train it, just as with any form of machine learning. Rather, I "know it when I see it".
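
    By 'complicated calculator' I mean something like this: training just nudges stored numbers until the 'right' next token gets more probability. A deliberately tiny stand-in, not how production LLMs are actually built.

    ```python
    # Toy next-token "training": a table of numbers gets adjusted by
    # gradient steps so the target token gains probability. Everything
    # here is a minimal stand-in for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["the", "cat", "sat", "down"]
    W = rng.normal(scale=0.1, size=(len(vocab), len(vocab)))  # the "values"

    def next_token_probs(token_id):
        logits = W[token_id]
        e = np.exp(logits - logits.max())
        return e / e.sum()

    # Repeatedly nudge toward predicting "sat" after "cat".
    context, target, lr = vocab.index("cat"), vocab.index("sat"), 0.5
    for _ in range(50):
        probs = next_token_probs(context)
        grad = probs.copy()
        grad[target] -= 1.0            # d(cross-entropy)/d(logits)
        W[context] -= lr * grad        # adjust the stored values

    print(next_token_probs(context).round(2))  # mass shifts onto "sat"
    ```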

    I've read some studies on 'game states', which is the closest that AI scientists have come to anything resembling reason. But even in a model that played the relatively simple game of Othello, the metric they tested the AI against (an AI trained on data of legal Othello board states) to 'prove' that it was 'thinking' (creating game states) was that it was doing better at choosing legal moves than random chance. Another reason it might have been doing better than random chance? Oh yeah... the training data full of legal board states. And when the AI was trained on less data? Oh? Would you look at that? The margin by which it beats random chance falls drastically.

    Almost like the LLM has no fucking clue what's going on and it's just matching board states... indexing. It doesn't understand the rules of Othello; it's just matching piece placement locations against the legal board states it was trained on (a toy version of what I mean is sketched below). A human trained on even a few hundred (vs. thousands of) such board states could likely start to reason out the rules of the game quite easily.
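
    Here's that toy version of 'matching'/'indexing': a dumb nearest-neighbour lookup over memorized positions gives you move predictions with zero grasp of the rules. The board encoding and all data here are invented for illustration.

    ```python
    # Sketch of prediction-by-indexing: look up the most similar memorized
    # board state and replay its move. No rules of Othello anywhere.
    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend these are 5,000 memorized legal positions (64 squares,
    # -1/0/1 for white/empty/black) with the move played from each.
    memorized_boards = rng.integers(-1, 2, size=(5000, 64))
    moves_played = rng.integers(0, 60, size=5000)

    def predict_move(board):
        """Return the move from the most similar memorized position."""
        distances = np.abs(memorized_boards - board).sum(axis=1)
        return int(moves_played[distances.argmin()])

    # A novel position still gets an answer, often a plausible-looking one,
    # purely by pattern matching.
    print(predict_move(rng.integers(-1, 2, size=64)))
    ```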

    I'm not even against AI or anything, but to call the machine learning we have now anything close to true, thinking AI is just foolish talk.