
  • China does tax the rich, but they also have an additional system of "voluntary donations." For example, Tencent "volunteered" to give up an amount equal to roughly three-quarters of its yearly profits to social programs.

    I say "voluntary" because it's obviously not very voluntary. China's government has a party cell inside of Tencent as well as a "golden share" that allows it to act as a major shareholder. It basically has control over the company. These "donations" also go directly to government programs like poverty alleviation and not to a private charity group.

  • I have the rather controversial opinion that the failure of communist parties doesn't come down to a failure to craft the perfect rhetoric or argument in the free marketplace of ideas.

    Ultimately facts don't matter because if a person is raised around thousands of people constantly telling them a lie and one person telling them the truth, they will believe the lie nearly every time. What matters really is how much you can propagate an idea rather than how well crafted that idea is.

    How much you can propagate an idea depends upon how much wealth you have to buy and control media institutions, and how much wealth you control depends upon your relations to production. I.e. in capitalist societies capitalists control all wealth and thus control the propagation of ideas, so arguing against them in the "free marketplace of ideas" is ultimately always a losing battle. It is thus pointless to even worry too much about crafting the perfect and most convincing rhetoric.

    Control over the means of production translates directly to political influence and power, yet communist parties not in power don't control any, and thus have no power. Many communist parties just hope one day to get super lucky to take advantage of a crisis and seize power in a single stroke, and when that luck never comes they end up going nowhere.

    Here is where my controversial take comes in. If we want a strategy that is more consistently successful it has to rely less on luck meaning there needs to be some sort of way to gradually increase the party's power consistently without relying on some sort of big jump in power during a crisis. Even if there is a crisis, the party will be more positioned to take advantage of it if it has already gradually built up a base of power.

    Yet, if power comes from control over the means of production, this necessarily means the party must make strides to acquire means of production in the interim period before revolution. This leaves us with the inevitable conclusion that communist parties must engage in economics even long prior to coming to power.

    The issue however is that to engage in economics in a capitalist society is to participate in it, and most communists at least here in the west see participation as equivalent to an endorsement and thus a betrayal of "communist principles."

    The result of this mentality is that communist parties simply are incapable of gradually increasing their base of power and their only hope is to wait for a crisis for sudden gains, yet even during crises their limited power often makes it difficult to take advantage of the crisis anyways so they rarely gain much of anything and are always stuck in a perpetual cycle of being eternal losers.

    Most communist parties just want to go from zero to one hundred in a single stroke, which isn't impossible, but it would require very pristine conditions and all the right social elements aligning perfectly. If you want a more consistent strategy for getting communist parties into power, you need something that doesn't rely on such a stroke of luck or any sudden leap in the party's political power, but is capable of growing it gradually over time. This requires the party to engage in economics, and there is simply no way around this conclusion.

  • Have you people had good luck with this? I haven't. I don't find that you can just "trick" people into believing in socialism by changing the words. The moment it becomes obvious you're criticizing free markets and the rich and advocating public ownership, they will catch on.

  • Trump flubbed the "peace talks" with the DPRK, so I don't know why people trust him to do it correctly with Russia. Trump is also a major reason for the war: Putin was hoping to be able to negotiate with the Trump administration the first time around, but all Trump did was ramp up sanctions on Russia. After Trump left office, they gave up on negotiations.

  • > We know how it works, we just don’t yet understand what is going on under the hood.

    Why should we assume "there is something going on under the hood"? This is my problem with most "interpretations" of quantum mechanics. They are complex stories to try and "explain" quantum mechanics, like a whole branching multiverse, of which we have no evidence for.

    It's kind of like if someone wanted to come up with a deep explanation to "explain" Einstein's field equations and what is "going on under the hood." Why should anything be "underneath" those equations? If we begin to speculate, we're doing just that, speculation, and if we take any of that speculation seriously, as in actually genuinely believe it, then we've left the realm of being a scientifically-minded rational thinker.

    It is much simpler to just accept the equations at face value, to accept quantum mechanics at face value. "Measurement" is not in the theory anywhere; there is no rigorous formulation of what qualifies as a measurement. The state vector is reduced whenever a physical interaction occurs from the reference point of the systems participating in the interaction, but not for the systems not participating in it, in which case the systems are then described as entangled with one another.

    This is not an "interpretation" but me just explaining literally how the terminology and mathematics work. If we just accept this at face value, there is no "measurement problem." The only reason there is a "measurement problem" is because this contradicts people's basic intuitions: if we accept quantum mechanics at face value, then we have to admit that whether or not properties of systems have well-defined values actually depends upon your reference point and is contingent on a physical interaction taking place.

    Our basic intuition tells us that particles are autonomous entities floating around in space on their lonesome like little stones or billiard balls up until they collide with something, and so even if they are not interacting with anything at all, they can meaningfully be said to "exist" with well-defined properties which should be the same properties for all reference points (i.e. the properties are absolute rather than relational). Quantum mechanics contradicts this basic intuition, so people think there must be something "wrong" with it, there must be something "under the hood" we don't yet understand, and that only by making the story more complicated or making a new discovery will we one day "solve" the "problem."

    Einstein once said, God does not play dice, and Bohr rebutted with, stop telling God what to do. This is my response to people who believe in the "measurement problem." Stop with your preconceptions of how reality should work. Quantum theory is our best theory of nature, there is currently no evidence it is going away any time soon, and it has withstood the test of time for decades. We should stop waiting for the day it gets overturned and disappears, accept this is genuinely how reality works, accept it at face value, and drop our preconceptions. We do not need any additional "stories" to explain it.

    > The blind spot is that we don’t know what a quantum state IS. We know the maths behind it, but not the underlying physics model.

    What is a physical model if not a body of mathematics that can predict outcomes? The physical meaning of the quantum state is completely unambiguous: it is just a list of probability amplitudes. Probability captures the likelihoods of certain outcomes manifesting during an interaction. Quantum probability amplitudes are somewhat unique in that they are complex-valued, but this just adds the additional degrees of freedom needed to simultaneously represent interference phenomena. The state vector is a mathematical notation for capturing the likelihoods of events occurring while accounting for interference effects.
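
    To make the "complex amplitudes add the degrees of freedom for interference" point concrete, here is a minimal numpy sketch (the amplitudes and phases are invented for illustration):

    ```python
    import numpy as np

    # Two complex amplitudes for two indistinguishable ways an outcome can occur
    a1 = (1 / np.sqrt(2)) * np.exp(1j * 0.0)    # phase 0
    a2 = (1 / np.sqrt(2)) * np.exp(1j * np.pi)  # phase pi

    # If probabilities simply added, we would get |a1|^2 + |a2|^2 = 1.0
    classical = abs(a1) ** 2 + abs(a2) ** 2

    # Quantum rule: add the amplitudes first, then square (Born rule)
    quantum = abs(a1 + a2) ** 2

    print(classical)  # 1.0
    print(quantum)    # ~0.0, destructive interference
    ```

    Real-valued probabilities alone cannot cancel like this; the complex phase is exactly the extra degree of freedom that encodes interference.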

    > It’s likely to fall out when we unify quantum mechanics with general relativity, but we’ve been chipping at that for over 70 years now, with limited success.

    There has been zero "progress" because the "problem" of unifying quantum mechanics and general relativity is a pseudoproblem. It stems from the bias that because we had success quantizing all the fundamental forces except gravity, gravity too must be quantizable. Since the method that worked for all the other forces, renormalization, failed for gravity, all these theories search for a different way to do it.

    But (1) there is no reason other than blind faith to think gravity should be quantized, and (2) there is no direct compelling evidence that either quantum mechanics or general relativity are even wrong.

    Also, we can already unify quantum mechanics and general relativity just fine. It's called semi-classical gravity, and it is what Hawking used to predict that black holes radiate. It makes quantum theory work just fine in a curved spacetime and is consistent with every experimental observation to this day.
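
    For reference, the defining equation of semi-classical gravity (the standard textbook formulation, not something specific to this thread) couples the classical geometry to the expectation value of the quantum stress-energy operator:

    ```latex
    G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}} \left\langle \hat{T}_{\mu\nu} \right\rangle
    ```

    The left side is ordinary general relativity; the right side replaces the classical stress-energy tensor with a quantum expectation value, which is what lets quantum fields live on a curved classical spacetime.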

    People who dislike semiclassical gravity will argue that it seems to make some absurd predictions under specific conditions we currently haven't measured. But this isn't a valid argument to dismiss it, because until you can actually demonstrate via experiment that such conditions can be created in physical reality, it remains a purely metaphysical criticism and not a scientific one.

    If semi-classical gravity is truly incorrect, then you cannot just point to it making certain strange predictions in certain domains; you also have to demonstrate that it is physically possible to actually probe those domains, and that this isn't just a metaphysical quirk of a theory making predictions about conditions that aren't physically possible in the first place, in which case what it predicts there would naturally be physically impossible as well.

    If you could construct such an experiment and its prediction was indeed wrong, you'd disprove it the very second you turned on the experiment. Hence, if you genuinely think semi-classical gravity is wrong and you are actually following the scientific method, you should be doing everything in your power to figure out how to probe these domains.

    But instead, people search for many different methods of trying to quantize gravity and then, in a post-hoc fashion, look for ways they could be experimentally verified; when a method is proven wrong, they go back and tweak it so it is no longer ruled out by experiment. Zero progress has been made because this is not science. Karl Popper's impact on the sciences has been hugely detrimental, because now everyone believes that if something can in principle be falsified it is suddenly "science," which has popularized incredibly unscientific methods in academia.

    Sorry, but both the "measurement problem" and the "unification problem" are pseudoproblems, not genuine scientific problems; both stem from biases about how we think nature should work rather than from just fitting the best physical model to the evidence and accepting that this is how nature works. Physics is making enormous progress and huge breakthroughs in many fields, but there has been zero "progress" in "solving" the "measurement problem" or quantizing gravity, because neither of these is a genuine scientific problem.

    They have been working at this "problem" for decades now, and what "science" has come out of it? String Theory, which is only applicable to an anti-de Sitter space despite our universe being a de Sitter space, meaning it only applies to a hypothetical universe we don't live in? Loop Quantum Gravity, which can't even reproduce Einstein's field equations in a limiting case? The Many Worlds Interpretation, for which no one can even agree on what assumptions need to be added to mathematically derive the Born rule, and thus there is no agreed-upon derivation? What "progress," besides a lot of malarkey from people chasing a pseudoproblem?

    If we want to know how nature works, we can just ask her, and that is the scientific method. The experiments are questions, the results are her answers. We should believe her answers and stop calling her a liar. The results of experimental practice---the actual real world physical data---should hold primacy above everything else. We should set all our preconceptions aside and believe whatever the data tells us. There is zero reason to try and update our theories or believe they are "incomplete" until we get an answer from mother nature that contradicts with our own theoretical predictions.

    People always cry about how fundamental physics isn't "making progress," but what they have failed to justify is why it should progress in the first place. The only justification for updating a theory is, again, to better fit with experimental data, but they present no data. They just complain it doesn't fit some bias and preconception they have. That is not science.

  • I always think articles like this are incredibly stupid, honestly. Political parties exist to push a particular ideology, not to win elections. If the communist party abandoned communism and became a neonazi party to win the election, and they did succeed in winning, did the communist party really "win"? Not really. If you have to abandon your ideology to win then you did not win.

    It's pretty rare for parties to actually abandon their ideology like that. The job of a political party is not to merely win, but to convince the population that their ideology is superior so people will back them. They want to win, yes, but under the conditions that they have won because the people back their message so that they can implement it.

    This is why I always find it incredibly stupid when I see all these articles and progressive political commentators saying that the Democrats are a stupid party for not shifting their rhetoric to be more pro-working class, to be anti-imperialist, etc. THE DEMOCRATS ARE NOT A WORKING CLASS PARTY. It would in fact be incredibly stupid for them to shift to the left, because doing so would abandon their values. The Democrats' values are billionaires, free market capitalism, and imperialism. Supporting these things is not a "stupid" decision they're making; THESE ARE THE FUNDAMENTAL BELIEFS OF THE PARTY.

    In normal countries, if you dislike a party's ideology, you support a different party. But Americans have this weird fantasy that Democrats should just be "reasonable" and entirely abandon their core values to back the voters' values instead, and so they refuse to ever back a different party because of this ridiculous delusion. Whenever the Democrats fail to adopt working-class values, they run these stupid headlines saying the Democrats are being "unreasonable" or "stupid" or have "bad strategy" or are "incompetents" or whatever and "just don't want to fight."

    Literally none of that is true. The Democrats are extremely fierce fighters when it comes to defending imperialism and the freedoms of billionaires. They aren't fighting for your values because those are not their values, and so you should back a different party.

  • On the surface, it does seem like there is a similarity. If a particle is measured over here and later over there, in quantum mechanics it doesn't necessarily have a well-defined position in between those measurements. You might then want to liken it to a game engine where the particle is only rendered when the player is looking at it. But the difference is that to compute how the particle arrived over there when it was previously over here, in quantum mechanics, you have to actually take into account all possible paths it could have taken to reach that point.

    This is something game engines do not do and actually makes quantum mechanics far more computationally expensive rather than less.
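
    For the curious, here is what "take into account all possible paths" looks like as a toy computation, in the spirit of Feynman's sum over histories (the lattice, step set, and action are all invented for illustration):

    ```python
    import numpy as np
    from itertools import product

    # A particle hops on a 1D lattice for N time steps; each step moves -1, 0, or +1.
    # The amplitude to arrive at a site is the sum over EVERY path of exp(i*S),
    # where S is a toy discretized action for that path.
    N = 8
    amplitude = {}
    for path in product((-1, 0, 1), repeat=N):   # all 3^8 = 6561 paths
        end = sum(path)
        S = sum(dx * dx for dx in path) / 2.0    # toy kinetic action
        amplitude[end] = amplitude.get(end, 0) + np.exp(1j * S)

    # Paths interfere *before* squaring; a game engine would just pick one path.
    total = sum(abs(a) ** 2 for a in amplitude.values())
    for site in sorted(amplitude):
        print(site, round(abs(amplitude[site]) ** 2 / total, 4))
    ```

    Note the cost: the number of paths grows exponentially with the number of steps, which is the opposite of a rendering shortcut.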

  • > So usually this is explained with two scientists, Alice and Bob, on far away planets. They’re each in the possession of a particle that is entangled with the other, and in a superposition of state 1 and state 2.

    This "usual" way of explaining it is just overly complicating it and making it seem more mystical than it actually is. We should not say the particles are "in a superposition" as if this describes the current state of the particle. The superposition notation should be interpreted as merely a list of probability amplitudes predicting the different likelihoods of observing different states of the system in the future.

    It is sort of like if you flip a coin, while it's in the air, you can say there is a 50% chance it will land heads and a 50% chance it will land tails. This is not a description of the coin in the present as if the coin is in some smeared out state of 50% landed heads and 50% landed tails. It has not landed at all yet!

    Unlike classical physics, quantum physics is fundamentally random, so you can only predict events probabilistically, but one should not conflate the prediction of a future event with a description of the present state of the system. The superposition notation is only writing down the probability amplitudes for what you will observe (state 1 or state 2) in the future event that you go to interact with the particle; it is not a description of the state of the particles in the present.
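
    Here is the "list of probability amplitudes" reading stated as literally as possible, as a toy numpy sketch (not a simulation of any particular experiment):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # The "superposition" written out as data: just two probability amplitudes
    psi = np.array([1, 1]) / np.sqrt(2)  # amplitudes for state 1 and state 2

    probs = np.abs(psi) ** 2             # Born rule: probability = |amplitude|^2
    print(probs)                         # [0.5 0.5]

    # The amplitudes predict the statistics of *future* observations:
    print(rng.choice(["state 1", "state 2"], size=10, p=probs))
    ```

    Nothing in this list describes what the particle "is doing" right now, any more than the coin's 50/50 describes a coin smeared between heads and tails.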

    > When Alice measures the state of her particle, it collapses into one of the states, say state 1. When Bob measures the state of his particle immediately after, before any particle travelling at light speed could get there, it will also be in state 1 (assuming they were entangled in such a way that the state will be the same).

    This mistreatment of the mathematical notation as a description of the present state of the system also leads to confusing language like "it collapses into one of the states," as if the change in a probability distribution represents a physical change to the system. The mental picture people who say this often have is that the particle literally physically becomes the probability distribution prior to measurement---the particle "spreads out" like a wave according to the probability amplitudes of the state vector---and when you measure the particle, this allows you to update the probabilities, so they must interpret this as the wave physically contracting into an eigenvalue---it "collapses" like a house of cards.

    But this is, again, overcomplicating things. The particle never spreads out like a wave and it never "collapses" back into a particle. The mathematical notation is just a way of capturing the likelihoods of the particle showing up in one state or the other, and when you measure what state it actually shows up in, you can update your probabilities accordingly. For example, if the coin is 50%/50% heads/tails and you observe it land on tails, you can update the probabilities to 0%/100% heads/tails because you know it landed on tails and not heads. Nothing "collapsed": you're just observing the actual outcome of the event you were predicting and updating your statistics accordingly.
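
    The coin analogy in code form, for what it's worth:

    ```python
    # Prediction about a future event
    probs = {"heads": 0.5, "tails": 0.5}

    # The event happens and we observe the outcome
    observed = "tails"

    # "Update": nothing physical collapsed, we just conditioned on the outcome
    probs = {k: (1.0 if k == observed else 0.0) for k in probs}
    print(probs)  # {'heads': 0.0, 'tails': 1.0}
    ```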

  • > Any time you do something to the particles on Earth, the ones on the Moon are affected also

    The no-communication theorem already proves that manipulating one particle in an entangled pair has no impact at all on the other. The proof uses the reduced density matrices of the particles, which capture both the probabilities of each particle showing up in a particular state and the coherence terms that capture its ability to exhibit interference effects. No change you can make to one particle in an entangled pair can possibly lead to an alteration of the reduced density matrix of the other particle.
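
    The theorem is easy to check numerically for a concrete case. A minimal numpy sketch with a Bell pair (the local operation is chosen arbitrarily):

    ```python
    import numpy as np

    # Bell state (|00> + |11>) / sqrt(2) as a 4-component state vector
    psi = np.zeros(4, dtype=complex)
    psi[0] = psi[3] = 1 / np.sqrt(2)

    def rho_B(psi):
        """Reduced density matrix of particle B (partial trace over A)."""
        m = psi.reshape(2, 2)       # m[a, b] = amplitude of |a>|b>
        return m.T @ m.conj()       # sums out A's index

    # An arbitrary local operation applied to particle A only
    theta = 0.7
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]], dtype=complex)
    psi_after = np.kron(U, np.eye(2)) @ psi

    print(np.allclose(rho_B(psi), rho_B(psi_after)))  # True: B is unaffected
    ```

    Whatever unitary you substitute for U, the comparison stays True, which is the content of the theorem for this case.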

  • I don't think solving the Schrodinger equation really gives you a good idea of why quantum mechanics is even interesting. You should also study very specific applications of it where it yields counterintuitive outcomes to see why it is interesting, such as in the GHZ experiment.
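
    A concrete way to see this: the GHZ state gives perfect correlations that no assignment of pre-existing local values can reproduce. A short numpy check (the state and operators are the standard ones from the GHZ argument):

    ```python
    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

    # GHZ state (|000> + |111>) / sqrt(2)
    ghz = np.zeros(8, dtype=complex)
    ghz[0] = ghz[7] = 1 / np.sqrt(2)

    def expval(ops, psi):
        """Expectation value of a tensor product of single-qubit operators."""
        O = ops[0]
        for op in ops[1:]:
            O = np.kron(O, op)
        return (psi.conj() @ O @ psi).real

    print(expval([X, Y, Y], ghz))  # -1
    print(expval([Y, X, Y], ghz))  # -1
    print(expval([Y, Y, X], ghz))  # -1
    print(expval([X, X, X], ghz))  # +1
    ```

    If each particle carried pre-assigned values x, y = ±1, multiplying the first three products would force the XXX product to be (-1)^3 = -1, since every y appears twice and squares away; quantum mechanics says it is +1 every single time.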

  • You have not made any point at all. Your first reply to me entirely ignored the point of my post, which you did not read, and followed with an attack; I replied pointing out that you ignored the whole point of my post and just attacked me without actually responding to it; and now you respond again with literally nothing of substance at all, just saying "you're wrong! touch grass! word salad!"

    You have nothing of substance to say, nothing to contribute to the discussion. You are either a complete troll trying to rile me up, or you just have a weird emotional attachment to this topic and felt an emotional need to respond and attack me prior to actually thinking up a coherent thing to criticize me on. Didn't your momma ever teach you that "if you have nothing positive or constructive to say, don't say anything at all"? Learn some manners, boy. Blocked.

  • The claim that AI is a scam is ridiculous and can only be stated by someone who doesn't understand the technology. Are we genuinely supposed to believe that capitalists hate profits and capital accumulation and are just wasting their money on something worthless? It's absurd. AI is already making huge breakthroughs in many fields, such as medicine with protein folding. I would recommend watching this video on that subject in particular. China has also been rapidly improving the speed of construction projects by coordinating them with AI.

    To put it in layman's terms, traditional computation is like Vulcans: extremely logical, computing everything step-by-step. This is very good if you want precise calculations, but very bad for many other kinds of tasks. Here's an example: you're hungry, you decide to go eat a pizza, you walk to the fridge and open it, take out the slice, put it in the microwave to heat it up, then eat it. Now, imagine if I gave you just the sensory data, such as information about what a person is seeing and feeling (hunger), and then asked you to write a foolproof sequence of logical statements that, when evaluated alongside the sensory data, would give you the exact muscle contractions needed to cause the person to carry out this task.

    You'll never achieve it. Indeed, even very simple tasks humans do every day, like translating spoken words into written words, are something nobody has ever managed to replicate with a set of logical if/else statements. Even something seemingly simple like this is far too complicated, with far too many variables, for someone to ever program: everyone's voice is a bit different, every audio recording is going to have slightly different background noise, etc., and to account for all of it with a giant logical proof would be practically impossible.

    The preciseness of traditional computation is also its drawback: you simply cannot write a program to do very basic human tasks we do every day. You need a different form of computation that is more similar to how human brains process information: something that processes information in a massively parallel fashion by tweaking billions of parameters (strengths in neural connections) to produce approximate rather than exact outputs, and that can effectively train itself ("learn") without a human having to adjust those billions of parameters manually. A toy sketch of this idea follows below.
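
    As an illustration of "tweak parameters instead of writing rules," here is a minimal neural network learning XOR, a function a single linear rule famously cannot express (the sizes and learning rate are arbitrary choices):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # XOR: inputs and targets
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)  # hidden layer parameters
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)  # output layer parameters

    sigmoid = lambda z: 1 / (1 + np.exp(-z))
    lr = 0.5

    for _ in range(20000):            # "training" = repeatedly nudging weights
        h = np.tanh(X @ W1 + b1)      # forward pass
        out = sigmoid(h @ W2 + b2)

        d_out = (out - y) * out * (1 - out)   # gradients of squared error
        d_h = (d_out @ W2.T) * (1 - h ** 2)

        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

    print(out.round(2).ravel())  # approaches [0, 1, 1, 0]: learned, not hand-coded
    ```

    Nobody wrote an if/else rule for XOR here; the behavior lives entirely in the learned parameters, which is the same principle behind speech recognition and the rest, just at a vastly larger scale.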

    If you have ever used any device with speech recognition, such as writing a text message with spoken voice, you have used AI, since this is one of the earliest examples of AI algorithms actually being used in consumer devices. USPS heavily integrates AI to do optical-character recognition, to automatically read the addresses written on letters to get them to the right place, Tom Scott has a great video on this here on the marvel of engineering that is the United States Postal Service and how it is capable of processing the majority of mail entirely automatically thanks to AI. There have also been breakthroughs in nuclear fusion by stabilizing the plasma with AI because it is too chaotic and therefore too complex to manually write an algorithm to stabilize it. Many companies use it in the assembly line for object detection which is used to automatically sort things, and many security systems use it to detect things like people or cars to know when to record footage efficiently to save space.

    Being anti-AI is just being a Luddite; it is to oppose technological development. Of course, not all AI is particularly useful; some companies shove it into their products for marketing purposes where it doesn't help much and may even make the experience worse. But to oppose the technology in general makes zero sense. It's just a form of computation.

    If we were to oppose AI, then Ludwig von Mises wins and socialism is impossible. Mises believed that socialism is impossible because no human could compute the vastness of the economy by hand. Of course, we later invented computers, and this accelerated the scale at which we can plan the economy, but traditional computation models still require you to manually write out the algorithm as a sequence of logical if/else statements, which has started to become too cumbersome as well. AI allows us to break free of this limitation with what are effectively self-writing programs: you just feed them massive amounts of data and they form the answer on their own, without the programmer even knowing how they solve the problem. The model acts as a kind of black box that produces the right output from a given input without anyone having to know how it internally works; in fact, the billion-parameter models are too complicated for anyone to even understand how they work internally.

    (Note: I am using the term "AI" interchangeably with technology based on artificial neural networks.)

  • They are incredibly efficient for short-term production, but very inefficient for long-term production. Destroying the environment is a long-term problem that doesn't have immediate consequences on the businesses that engage in it. Sustainable production in the long-term requires foresight, which requires a plan. It also requires a more stable production environment, i.e. it cannot be competitive because if you are competing for survival you will only be able to act in your immediate interests to avoid being destroyed in the competition.

    Most economists are under a delusion known as neoclassical economics which is literally a nonphysical theory that treats the basis of the economy as not the material world we actually live in but abstract human ideas which are assumed to operate according to their own internal logic without any material causes or influences. They then derive from these imagined "laws" regarding human ideas (which no one has ever experimentally demonstrated but were just invented in some economists' armchair one day) that humans left to be completely free to make decisions without any regulations at all will maximize the "utils" of the population, making everyone as happy as possible.

    With the complete failure of this policy leading to the US Great Depression, many economists recognized it was flawed and made some concessions, such as with Keynesianism, but they never abandoned the core idea. In fact, the core idea was just reformulated to be compatible with Keynesianism in what is called the neoclassical synthesis. It still exists as a fundamental belief of most every economist that a completely unregulated market economy without any plan at all will automagically produce a society with maximal happiness, and while they will admit some caveats to this these days (such as the need for a central organization to manage currency in Keynesianism), these are treated as the exception and not the rule. Their beliefs are still incompatible with long-term sustainable planning because, in their minds, the success of markets comes from util-maximizing decisions that are fundamental to the human psyche, so any long-term plan must contradict this and lead to a bad economy that fails to maximize utils.

    The rise of Popperism in western academia has also played a role here. A lot of material scientists have been rather skeptical of the social sciences and aren't really going to take seriously arguments like those of neoclassical economics, which are based largely in mysticism about human free will, and so a second argument against long-term planning was put forward by Karl Popper, which has become rather popular in western academia. Popper argued that it is impossible to learn from history because it is too complicated, with too many variables, and you cannot control them all. You would need a science that studies how human societies develop in order to justify a long-term development plan into the future, but if it's impossible to study them to learn how they develop because they are too complicated, then it is impossible to have such a science, and thus impossible to justify any sort of long-term sustainable development plan. It would always be based on guesswork and so more likely to do more harm than good. Popper argued that instead of long-term development plans, the state should be purely ideological, what he called an "open society" operating purely on the ideology of liberalism rather than getting involved in economics.

    As long as both neoclassical economics and Popperism remain dominant trends in western academia, there will never be long-term sustainable planning, because they are fundamentally incompatible ideas.

  • You did not read what I wrote, so it is ironic that you call it "word salad" when you are not even aware of the words I wrote, since you had an emotional response and wrote this reply without actually addressing what I argued. I stated that it is impossible to have a very large institution without strict rules that people follow, which also requires the enforcement of those rules, and that means a hierarchy, as you will have rule-enforcers.

    Also, you are insisting your personal definition of anarchism is the one true definition that I am somehow stupid for disagreeing with, yet anyone can just scroll through the comments on this thread and see there are other people disagreeing with you while also defending anarchism. A lot of anarchists do not believe anarchism means "no hierarchy." Like, seriously, do you unironically believe in entirely abolishing all hierarchies? Do you think a medical doctor should have as much authority on how to treat an injured patient as the janitor of the same hospital? Most anarchists aren't even "no hierarchy"; they are "no unjustified hierarchy."

    The fact you are entirely opposed to hierarchy makes your position even more silly than what I was criticizing.

  • All libertarian ideologies (including left and right wing anarchism) are anti-social and primitivist.

    It is anti-social because it arises from a hatred of working in large groups. It's impossible to have any sort of large-scale institution without rules that people are expected to follow, and libertarian ideology arises out of people hating to have to follow rules, i.e. to be a respectable member of society, i.e. they hate society and don't want to be social. They thus desire very small institutions with limited rules and restrictions. Right-wing libertarians envision a society dominated by small private businesses, while left-wing libertarians imagine a society dominated by small worker cooperatives, communes, or some sort of community councils.

    Of course, everyone of every ideology opposes submitting to hierarchies they find unjust, but hatred of submitting to hierarchies at all is just anti-social, as any society will have rules, people who write the rules, and people who enforce the rules. That is necessary for any social institution to function. Part of being an adult living in a society is learning to obey the rules, such as traffic rules. Sometimes it is annoying or inconvenient, but you do it because you are a respectable member of society and not a rebellious edgelord who makes things harder on everyone else by refusing to obey basic rules.

    It is primitivist because some institutions simply only work if they are very large. You cannot have something like NASA that builds rocket ships operated by five people. You are going to always need an enormous institution which will have a ton of people, a lot of different levels of command ("hierarchy"), strict rules for everyone to follow, etc. If you tried to "bust up" something like NASA or SpaceX to be small businesses they simply would lose their ability to build rocket ships at all.

    Of course, anarchists don't mind; they will say, "who cares about rockets? They're not important." It reminds me of the old meme that spread around where someone asked anarchists how their tiny communes would be able to organize the massive supply chains of our modern societies, and they responded by saying that the supply chain would be reduced to people growing beans in their backyard and eating them, like feudal peasants. They won't even defend the claim that their system could function as well as our modern economy; they just say modern marvels of human engineering don't even matter, because they are ultimately primitivists at heart.

    I never understood the popularity of libertarian and anarchist beliefs in programming circles. We would never have entered the Information Age under an anarchist or libertarian system. No matter how much they might pretend these are the ideal systems, they don't even believe it themselves. If a libertarian has a serious medical illness, they are either going to seek medical help at a public hospital or a corporate hospital. Nobody is going to seek medical help at a "hospital small business" run out of someone's garage. We all intuitively and implicitly understand that large swathes of the economy that we all take advantage of simply cannot feasibly be run by small organizations, but libertarians are just in denial.

  • Anarchism thus becomes meaningless as anyone who defends certain hierarchies obviously does so because they believe they are just. Literally everyone on earth is against "unjust hierarchies" at least in their own personal evaluation of said hierarchies. People who support capitalism do so because they believe the exploitative systems it engenders are justifiable and will usually immediately tell you what those justifications are. Sure, you and I might not agree with their argument, but that's not the point. To say your ideology is to oppose "unjust hierarchies" is to not say anything at all, because even the capitalist, hell, even the fascist would probably agree that they oppose "unjust hierarchies" because in their minds the hierarchies they promote are indeed justified by whatever twisted logic they have in their head.

    Telling me you oppose "unjust hierarchies" thus tells me nothing about what you actually believe; it does not tell me anything at all. It is as vague as saying "I oppose bad things," which is a meaningless statement on its own without clarifying what is meant by "bad." Similarly, "I oppose unjust hierarchies" is a meaningless statement without clarifying what qualifies as "just" and "unjust," and once you tell me that, it would make more sense to label you based on your answer to that question. Anarchism thus becomes a meaningless word that tells me nothing about you. For example, you might tell me one unjust hierarchy you want to abolish is prison. It would make more sense for me to call you a prison abolitionist than an anarchist, since that term at least carries meaning, and there are plenty of prison abolitionists who don't identify as anarchists.

  • There is no "fundamentally" here, you are referring to some abstraction that doesn't exist. The models are modified during the fine-tuning process, and the process trains them to learn to adopt DeepSeek R1's reasoning technique. You are acting like there is some "essence" underlying the model which is the same between the original Qwen and this model. There isn't. It is a hybrid and its own thing. There is no such thing as "base capability," the model is not two separate pieces that can be judged independently. You can only evaluate the model as a whole. Your comment is just incredibly bizarre to respond to because you are referring to non-existent abstractions and not actually speaking of anything concretely real.

    The model is neither Qwen nor DeepSeek R1; it is DeepSeek R1 Qwen Distill, as the name says. It would be like saying it's false advertising to call a mule a hybrid of a donkey and a horse because its "base capabilities" are a donkey, so it has nothing to do with horses and is really just a donkey at the end of the day. The statement is so bizarre I just do not even know how to address it. It is a hybrid, its own distinct third thing. The model's capabilities can only be judged as it exists, and its capabilities differ from Qwen and the original DeepSeek R1, as actually scored by various metrics.

    Do you not know what fine-tuning is? It refers to actually adjusting the weights in the model, and it is the weights that define the model. And this fine-tuning is being done alongside DeepSeek R1, meaning it is being adjusted to take on capabilities of R1 within the model. It gains R1 capabilities at the expense of Qwen capabilities as DeepSeek R1 Qwen Distill performs better on reasoning tasks but actually not as well as baseline models on non-reasoning tasks. The weights literally have information both of Qwen and R1 within them at the same time.

    Speaking of its "base capabilities" is a meaningless floating abstraction which cannot be empirically measured and doesn't refer to anything concretely real. It only has its real concrete capabilities, not some hypothetical imagined capabilities. You accuse them of "marketing" even though it is literally free. All DeepSeek sells is compute to run models, but you can pay any company to run these distill models. They have no financial benefit for misleading people about the distill models.

    You genuinely are not making any coherent sense at all, you are insisting a hybrid model which is objectively different and objectively scores and performs differently should be given the exact same name, for reasons you cannot seem to actually articulate. It clearly needs a different name, and since it was created utilizing the DeepSeek R1 model's distillation process to fine-tune it, it seems to make sense to call it DeepSeek R1 Qwen Distill. Yet for some reason you insist this is lying and misrepresenting it and it actually has literally nothing to do with DeepSeek R1 at all and it should just be called Qwen and we should pretend it is literally the same model despite it not being the same model as its training weights are different (you can do a "diff" on the two model files if you don't believe me!) and it performs differently on the same metrics.

    There is simply no rational reason to intentionally want to mislabel the model as just being Qwen and having no relevance to DeepSeek R1. You yourself admitted that the weights are trained on R1 data so they necessarily contain some R1 capabilities. If DeepSeek was lying and trying to hide that the distill models are based on Qwen and Llama, they wouldn't have literally put that in the name to let everyone know, and released a paper explaining exactly how those were produced.

    It is clear to me that you and your other friends here have some sort of alternative agenda that makes you not want to label it correctly. DeepSeek is open about the distill models using Qwen and Llama, but you want them to be closed and not reveal that they also used DeepSeek R1. The current name is perfectly fine, and pretending it is just a Qwen model (or Llama, for the other distilled versions) is straight-up misinformation; anyone who downloads the models and runs them themselves will immediately see that they perform differently. They are hybrid models correctly called what they are: DeepSeek R1 Qwen Distill and DeepSeek R1 Llama Distill.

  • The 1.5B/7B/8B/13B/32B/70B models are all officially DeepSeek R1 models, that is what DeepSeek themselves refer to those models as. It is DeepSeek themselves who produced those models and released them to the public and gave them their names. And their names are correct, it is just factually false to say they are not DeepSeek R1 models. They are.

    The "R1" in the name means "reasoning version one" because it does not just spit out an answer but reasons through it with an internal monologue. For example, here is a simple query I asked DeepSeek R1 13B:

    Me: can all the planets in the solar system fit between the earth and the moon?

    DeepSeek: Yes, all eight planets could theoretically be lined up along the line connecting Earth and the Moon without overlapping. The combined length of their diameters (approximately 379,011 km) is slightly less than the average Earth-Moon distance (about 384,400 km), allowing them to fit if placed consecutively with no required spacing.

    However, on top of its answer, I can expand an option to see the internal monologue it went through before generating the answer; you can find the internal monologue here, because it's too long to paste.

    What makes these consumer-oriented models different is that rather than being trained on raw data, they are trained on synthetic data from pre-existing models. That's what the "Qwen" or "Llama" parts of the name mean. The 7B model is trained on synthetic data produced by Qwen, so it is effectively a compressed version of Qwen. However, neither Qwen nor Llama can "reason"; they do not have an internal monologue.

    This is why it is just incorrect to claim that something like DeepSeek R1 7B Qwen Distill has no relevance to DeepSeek R1 but is just a Qwen model. If it's supposedly a Qwen model, why can it do something that Qwen cannot do but only DeepSeek R1 can? It's because, again, it is a DeepSeek R1 model: they add the R1 reasoning to it during the distillation process as part of its training. They basically use synthetic data generated from DeepSeek R1 to fine-tune it, readjusting its parameters so it adopts a similar reasoning style. It is objectively a new model because it performs better on reasoning tasks than a normal Qwen model. It cannot be considered solely a Qwen model nor an R1 model, because its parameters contain information from both. A toy sketch of the general idea is below.
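
    For anyone unfamiliar with what distillation actually does to the weights, here is a minimal toy sketch of the general technique in PyTorch: a small "student" is fine-tuned to match a "teacher's" outputs, so information about the teacher ends up inside the student's own parameters. Everything here (model sizes, data, loss) is invented for illustration; DeepSeek's actual pipeline fine-tuned Qwen and Llama models on synthetic reasoning text generated by R1, as described in their paper.

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))
    student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))

    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    kl = nn.KLDivLoss(reduction="batchmean")

    for step in range(500):
        x = torch.randn(32, 16)                  # stand-in for real inputs
        with torch.no_grad():
            target = teacher(x).softmax(dim=-1)  # teacher's output distribution

        # The student's weights are what get updated; after training they carry
        # information about the teacher's behavior on top of their own init.
        loss = kl(student(x).log_softmax(dim=-1), target)
        opt.zero_grad(); loss.backward(); opt.step()

    print(loss.item())  # shrinks as the student absorbs the teacher's behavior
    ```

    After this loop, calling the result "just the student" would miss the point: its weights now measurably encode the teacher's behavior, which is exactly the sense in which the distill models contain R1.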

  • As I said, they will likely come to the home in the form of cloud computing, which is how advanced AI comes to the home. You can run some AI models at home, but they're nowhere near as advanced as cloud-based services and so not as useful. I'm not sure why, if we ever have AGI, it would need to be run at home. It doesn't need to be. It would be nice if it could be run entirely at home, but that's no necessity, just a convenience. Maybe your personal AGI robot who does all your chores for you only works when the WiFi is on. That would not prevent people from buying it; I mean, those Amazon Fire TVs are selling like hot cakes and they only work when the WiFi is on. There also already exist some AI products that require a constant internet connection.

    It is kind of similar with quantum computing: there actually do exist consumer-end home quantum computers, such as Triangulum, but it only does 3 qubits, so it's more of a toy than a genuinely useful computer. For useful tasks, it will in all likelihood be cloud-based. The NMR technology Triangulum is based on isn't known to be scalable, so the only other possibility for quantum computers making it into the home in a non-cloud-based fashion would be optical quantum computing. There could be a breakthrough there, you can't rule it out, but I wouldn't keep my fingers crossed. If quantum computers become useful for regular people in the next few decades, I would bet it will all be through cloud-based services.