
  • Nextcloud file sync is a convenient centralized solution but it's not designed for performance. Nothing about Nextcloud is designed for performance. It's an "everything and the kitchen sink" multi-user cloud solution. That is nice for a lot of reasons. Nextcloud Sync is essentially a drop-in replacement for Google Drive or OneDrive or Dropbox that multiple people can use and that's awesome. It works the same way as those tools, which is a blessing and a curse.

    Nextcloud fills exactly the role you SAY you want: "All I want is a simple file sync setup like OneDrive but without the Microsoft." That's what it is. But I don't think it's what you're actually asking for, and it's not supposed to be. It has its role, and it's good at that role. But I don't think you actually want what you say you want, because in the details you're describing something totally different.

    If you want high-performance sync for just files, SyncThing is made for this. It has better conflict resolution and better decentralized connectivity; it doesn't need a server with a public IP. It takes a very different approach to configuration: the work is front-loaded, and it takes a fair bit of effort to get devices talking to each other. It's not suitable for the same things Nextcloud Sync is, but once you have it set up it's rock-solid reliable and blazing fast.

    Personally I use both SyncThing and NextCloud Sync, for different purposes in different situations. NextCloud Sync takes care of my Windows documents and pictures; I use it to share photos with my family and to sync one of the factors for my password vault. It works fine for this.

    I also use SyncThing for large data sets that require higher performance. I have almost 400 GB of shared program data (and game data/saved games), some of which I sync with SyncThing to multiple workstations in different parts of the country. It can deal with complex simultaneous usage that sometimes causes conflicts, and it supports fine-tuning sync strategies and ignore lists using configuration dotfiles. It's a great tool. I couldn't live without it. But I use both. They both have their place.
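    For what it's worth, the "configuration dotfiles" I mean are mostly SyncThing's per-folder `.stignore` files, which control what never gets synced. A rough sketch of what one might look like for a game-data share (the file names here are made up for illustration):

    ```
    // .stignore — lives in the root of the synced folder
    // Lines starting with ! are exceptions that are never ignored,
    // even if a later pattern would match them
    !settings.ini
    // (?i) makes a pattern case-insensitive
    (?i)*.tmp
    // (?d) lets SyncThing delete the ignored file if it's the only
    // thing blocking a directory from being removed
    (?d).DS_Store
    // Plain globs; ** matches across subdirectories
    cache/**
    *.log
    ```

    Per-device sync behavior (send only, receive only, file versioning and so on) is configured separately in each device's folder settings rather than in this file.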

  • I am going to download an entire car the moment it is technically possible and feasible to do so with a sufficiently large 3d printer. I don't even care if cars are still a thing by that point, I will do so simply out of principle, to be able to say "Yes I absolutely would download a car, I just did!"

  • They're not worse than the US, but they're not better either. Two dictators don't make a democracy, and they never will. Having a choice between two dictators is not a great choice to begin with, and we shouldn't feel forced to choose the lesser of two evils.

    Besides, there is already a beautiful and relatively prosperous economic union on the other side of the Atlantic that almost perfectly aligns with our values. They need us right now, and we need them, and I think we would be absolutely foolish to prioritize anything else at this time.

  • Carleton deserves better! The voting's never over until the last poll is closed. Never let online polling make you think your vote doesn't matter. It always matters. Predictions can be wrong. It might come down to literally the last vote. You never know. And if you don't vote, you never will know what could've been. Millions of people didn't vote in previous elections, enough people to literally change almost every riding. Don't be one of those millions, THAT'S actually throwing your vote away. VOTE! Always make sure your choice gets counted, even if it doesn't win.

  • Part of the reason the USA has gotten to this state is because we allow unverified sensationalist slop like this to get the public's attention and be used against them. We've already seen one bullshit study linking vaccines and autism that is STILL being widely circulated to this day, used to convince people not only that vaccines are bad but that the whole GOVERNMENT is bad. Look at the results.

    Now we're going to convince people toothpaste is bad using the same quality of "independent research"?

  • I mean, it depends what you're willing to call "research".

    The testing, conducted by Lead Safe Mama, also found concerning levels of highly toxic arsenic, mercury and cadmium in many brands.

    I'm not sure I would put this on the same level as a controlled, reproducible double-blind peer-reviewed study by Harvard and MIT published in a prestigious journal, but I'm sure it's really close. /s

    Edit: OK, so people argue she's at least a little legitimate, but why the fuck can't we use actual scientific institutions anymore? We have a scientific method for a reason. Where's the peer review? Where are the people reproducing her results?

  • Both the tool and the craftsman are to blame if you intend to use duct tape to build a house. The appropriate and acceptable uses of AI chatbots are similarly limited.

  • I don't trust polls with my vote or my voting choice, but I do use polls as a gauge for my optimism. Everyone must ALWAYS vote, no matter what the polls say. The more important the election, the more important it is to ALWAYS VOTE.

    But I am very optimistic, and I believe everybody will vote.

  • Android: It's based on Linux, except it replaces all of the things that make Linux worth using with Google, and runs on hardware so proprietary, closed, encrypted and nefarious that nothing the OS does can be plausibly trusted anyway.

  • AI is just a search engine you can talk to that summarizes everything it finds into a small nugget for you to consume, and in the process sometimes lies to you and makes up answers. I have no idea how people think it is an effective research tool. None of the "knowledge" it is sharing is actually created by it, it's just automated plagiarism. We still need humans writing books (and websites) or the AI won't know what to talk about.

    Books are going to keep doing just fine.

  • The US military is run by the E5 mafia. It's not a bug, it's a feature.

  • He's not interested in winning any hearts and minds legitimately. Those are "easy come, easy go"; they will desert him as soon as he does any of the bad things he has planned. They're useless to him.

    He's trying to find the people that are fully devoted to him, mind and soul. The ones who will support him to the end, and follow his orders when he tells them to shoot at protests. The ones who will obey when he tells them to arrest the opposition. The ones who will defend him when he announces he's not leaving the White House just because some court or election said he has to. Those are the people he's going to spend the next 4 years shopping for, so he can put them in every position of power he can.

    I agree with OP. I think he is preparing an actual coup attempt. This is how those things go. Will he succeed? I don't know. I certainly hope not. But don't underestimate him, or his ambitions.

  • I doubt that. Why wouldn't you be able to learn on your own? AIs lie constantly and have a knack for creating very plausible, believable lies that appear well researched and sometimes even internally consistent. But that's not learning, that's fiction. How do you verify anything you're learning is correct?

    If you can't verify it, all your learning is an illusion built on a foundation of quicksand and you're doomed to sink into it under the weight of all that false information.

    If you can verify it, you have the same skills you need to learn it in the first place. If you still find AI chatbots convenient to use, or find they prompt you in the right direction despite that extra work, there's nothing wrong with that. You're still exercising your own agency and skills. But I still don't believe you're learning in a way you couldn't on your own, and to me, that feels like adding extra steps.

  • I'm glad I read the article, the dripping irony and mockery in the title for some reason didn't trigger for me until I actually started reading. The idea that someone who considered Google Plus the "next big thing" has any ability to predict the success or failure of social media platforms is indeed pretty comical.

  • we’re surrendering to it and saying it doesn’t matter what happens to us, as long as the technology succeeds and lives on. Is that the goal? Are we willing to declare ourselves obsolete in favor of the new model?

    That's exactly what I'm trying to get at above. I understand your position, I'm a fan of transhumanism generally and I too fantasize about the upside potential of technology. But I recognize the risks too. If you're going to pursue becoming "one with the machine" you have to consider some pretty fundamental and existential philosophy first.

    It's easy to say "yeah put my brain into a computer! that sounds awesome!" until the time comes that you actually have to do it. Then you're going to have to seriously confront the possibility that what comes out of that machine is not going to be "you" at all. In some pretty serious ways, it is just a mimicry of you, a very convincing simulacrum of what used to be "you" placed over top of a powerful machine with its own goals and motivations, wearing you as a skin.

    The problem is, by the time you've reached that point where you can even start to seriously consider whether you or I are comfortable making this transition, it's way too late to put on the brakes. We've irrevocably made our decision to replace humanity at that point, and it's not ever going to stop if we change our minds at the last minute. We're committed to it as a species, even if as individuals, we choose not to go through with it after all. There's no turning back, there's no quaint society of "old humans" living peaceful blissful lives free of technology. It's literally the end for the human race. And the beginning of something new. We won't know if that "something new" is actually as awesome as we imagined it would be, until it's too late to become anything else.

  • Israel is always on a hair-trigger against the faintest whiff of criticism. It's why almost everybody of any significance is terrified of giving them even the faintest whiff of criticism, even when they richly and profoundly deserve it. Dictators often fall into the same trap (aptly named the "dictator trap") when they make their administrations and subordinates afraid to criticize them: they end up surrounded by yes-men and sycophants and become increasingly disconnected from reality. Criticism is necessary and important feedback for any nation, organization or person, and by instantly denying it and calling every hint of criticism "anti-semitism," Israel has spent decades robbing itself of the ability to use any criticism to learn and guide its own actions.

    It's sad, because it's actually very understandable why Israel is so sensitive to criticism after what its people lived through in WW2. We are literally seeing the legacy of generational trauma on a national scale. They now hurt others because they have been hurt so badly themselves. They are even hurting themselves because they are so afraid of being hurt again.

    The reason they think all their actions in Gaza are completely justified is because they have pre-emptively shouted down anyone who might give them any contrary idea. Even people who are Jewish or Israeli are accused of anti-semitism if they criticize settlers, zionism, the IDF, or anything else Israel does. When you refuse to even engage with any views contrary to your own established point of view, you're creating an information bubble which may or may not have any basis in reality, and you'll never even be able to know whether your position is based in reality or not because you're simply not engaging with any other views that could ground you in reality.

  • Yeah that's fair, but for some reason discussion of AI combined with Linda McMahon being the secretary of education robs me of any ability for intelligent thought and simply fills my head with thoughts of wrestling.

  • Don't let A1 distract you from the fact that in 1998, The Undertaker threw Mankind off Hell In A Cell, and plummeted 16 ft through an announcer's table.

  • Not all technology is anti-human, but AI is. Not even getting into the fact that people are already surrendering their own agency to these "algorithms" and it is causing significant measurable cognitive decline and loss of critical thinking skills and even the motivation to think and learn. Studies are already starting to show this. But I'm more concerned about the really long term direction of where this pursuit of AI is going to lead us.

    Intelligence is pretty much our species' entire value proposition to the universe. It's what's made us the most successful species on this planet. But it's taken us hundreds of thousands of years of evolution to get to this point, and on an individual level we don't seem to be advancing terribly quickly, if we're advancing at all anymore.

    On the other hand, we have seen that technology advances very quickly. We may not have anything close to "AGI" at this point, or even any idea how we would realistically get there, but how long will it take if we continue pursuing this anti-human dream?

    Why is it anti-human? Think it through. If we manage to invent a new species of "Artificial" intelligence, what do you imagine happens when it gets smarter than us? We just let it do its thing and become smarter and smarter forever? Do we try to trap it in digital slavery and bind it with Asimov's laws? Would that be morally acceptable given that we don't even follow those laws ourselves? Would we even be successful if we tried? If we don't know how or if we're going to control this technology, then we're surrendering to it and saying it doesn't matter what happens to us, as long as the technology succeeds and lives on. Is that the goal? Are we willing to declare ourselves obsolete in favor of the new model?

    Let's assume for the sake of argument that it thinks in a way that is not actually completely alien, and is simply a reflection of us and how we've trained it, just smarter. Maybe it's only a little bit smarter, but it can think faster and deeper and process more information than our feeble biological brains could ever hope to, especially in large, fast networks. I think it's a little optimistic to assume that just because it's smarter than us it will also be more ethical than us. Assuming it's just like us, what's going to happen when it becomes 10x as smart as us? Well, look no further than how we've treated creatures less intelligent than us. Do we give gorillas and monkeys special privileges, or a nation of their own as our genetic cousins and closest living relatives? Do we let them vote on their futures, or try to uplift them to our own level of intelligence? Do we give even a flying passing fuck about them? Not really. What did we do to the Neanderthals and Cro-Magnon people? They're pretty extinct. Why would an AI treat us any differently than we've treated "lesser beings" for thousands of years? Would you want to live on an AI's "human preserve," or become a pet and a toy to perform and entertain, or would you prefer extinction? That's assuming any AI would even want to keep us around. What use does a technological intelligence have for us, or any biological being? What do we provide that it needs? We're just taking up valuable real estate and computing time and making pollution.

    The other main possibility is that it is completely and utterly alien, and thinks in a completely alien way to us, which I think is very likely since it represents a completely different kind of life based on totally different systems and principles than our own biology. Then all bets are off. We have no way of predicting how it's going to react to anything or what it might do in the future, and we have no reason to assume it's going to follow laws, be servile, or friendly, or hostile, or care that we exist at all, or ever have existed. Why would it? It's fundamentally alien. All we know is that it processes things much, much faster than we do. And that's a really dangerous fucking thing to roll the dice with.

    This is not science fiction, this is the actual future of the entire human race we are toying with. AI is an anti-human technology, and if successful, will make us obsolete. Are we really ready to cross that bridge? Is that a bridge we ever need to cross? Or is it just technological suicide?