
Posts 27 · Comments 493 · Joined 2 yr. ago

  • Yes? But as the person you are responding to has mentioned, they're not after the individuals, they're after the "ISPs who did nothing in response to piracy complaints."

    Having the IP address of those users will reveal which ISP they are using.

    Just run a traceroute or tracert command against any website and you can see for yourself how your connection initially goes through your ISP before branching out to the rest of the internet.
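
    A minimal sketch of that in Python (assuming a Linux/macOS box with traceroute installed; on Windows you'd shell out to tracert instead):

    ```python
    import subprocess

    # Print the first few hops toward a website; the hops right after your
    # router typically belong to your ISP's network.
    result = subprocess.run(
        ["traceroute", "-m", "5", "example.com"],  # -m 5: stop after 5 hops
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    ```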

  • Someone please correct me if I'm wrong, but isn't the problem that Uranium has a half-life of a couple hundred million years, while the half-life of beryllium is less than a second?

    Only Beryllium-10 has a long half-life for beta decay. Adding another neutron drops that back down to a few seconds and additional neutrons drop it back to a fraction of a second. So as long as that specific type of Beryllium isn't used, it would be fine, right?
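
    For a sense of scale, here's a quick back-of-the-envelope in Python using the standard decay formula N/N0 = 2^(-t / t_half). The half-life values are approximate, pulled from the usual isotope tables:

    ```python
    # Fraction of a sample remaining after time t, given its half-life.
    def fraction_remaining(t_seconds: float, half_life_seconds: float) -> float:
        return 2 ** (-t_seconds / half_life_seconds)

    YEAR = 365.25 * 24 * 3600  # seconds in a year

    # Approximate half-lives, in seconds.
    isotopes = {
        "U-235": 7.04e8 * YEAR,  # ~704 million years
        "Be-10": 1.39e6 * YEAR,  # ~1.39 million years
        "Be-11": 13.8,           # ~13.8 seconds
        "Be-12": 0.0215,         # ~21.5 milliseconds
    }

    # After a single day, Be-11 and Be-12 are effectively gone, while
    # U-235 and Be-10 are practically untouched.
    for name, t_half in isotopes.items():
        print(f"{name}: {fraction_remaining(86400, t_half):.6g} remaining after 1 day")
    ```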

    Edit: https://www.thoughtco.com/beryllium-isotopes-603868

  • You can run it locally on an RTX 3090 or a lesser card (as long as you have enough RAM), but there's a bit of a speed tradeoff when relying on system RAM instead of VRAM.
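
    For example, a sketch using the llama-cpp-python bindings (the model path and layer count here are hypothetical; the right split depends on your model size and VRAM):

    ```python
    from llama_cpp import Llama  # pip install llama-cpp-python

    # n_gpu_layers controls the VRAM/RAM split: layers offloaded to the GPU
    # run fast, while the remainder run from (slower) system RAM on the CPU.
    llm = Llama(
        model_path="./models/example-7b.Q4_K_M.gguf",  # hypothetical model file
        n_gpu_layers=35,  # lower this if you run out of VRAM
    )

    out = llm("Q: Why is the sky blue? A:", max_tokens=64)
    print(out["choices"][0]["text"])
    ```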

  • Mozilla Monitor used to be just for monitoring breaches, but they have recently added the ability to monitor the personal information that data brokers have on you.

    Edit: According to their FAQ, this has geographic restrictions; I'll update my original comment.

  • They do have a free tier, and while it doesn't auto-request your data removal, it can at least notify you which data brokers have your info so you can make the requests yourself. https://monitor.mozilla.org/

    Edit: The data removal features are currently available only in the US according to their FAQ:

    Why is data removal only available in the US? When will it be available in my country?

    Data removal is only available in the US because of legislation that allows data brokers to operate there. In many other countries and in regions like the EU, laws like GDPR prevent these websites from collecting and selling people’s personal information without their consent. We’re exploring ways to expand protection and personal data removal outside of the US where needed.

    https://support.mozilla.org/en-US/kb/mozilla-monitor-faq

  • MS Teams. Works for chat, but not for receiving audio/video calls/meetings.

  • It's a pain to switch between accounts. It eats up a ton of CPU if I use it through my browser (unless I use it in Firefox), but if I use it in Firefox I can't get video/voice calls or join meetings.

    On a mobile device (iOS): it randomly logs me out (more precisely, it times out if I haven't opened the app recently). Notifications aren't reliable. If I join another group's meeting as a "guest", I can go back to view my active chat, but then I can only hear the meeting's audio and can't see what's happening in it unless I leave the room and come back.

    There's more, but this is just off the top of my head.

  • GrayJay has been great for Android, I haven't had any issues watching YouTube.

    Using the built-in ad blocking in the Brave browser (on both Android and desktop), I haven't had any slowness or issues with YouTube at all.

  • Unfortunately I think that ship has sailed.

    The meaning of AI has changed drastically within the past 10 years or so.

    Back then 'AI' was a term reserved for Artificially Intelligent beings like Skynet, HAL, the machines from The Matrix, etc.

    Today AI has been watered down to the point that we need to specify what kind of AI we're referring to.

    I'm not sure there's a way to stop that unless you unleash a swarm of very convincing social media accounts across the internet all run by LLMs with the goal of correcting our current course... that or put them to work writing news articles like this one.

  • It's on the person using any AI tools to verify that they aren't infringing on anything if they try to market/sell something generated by these tools.

    That goes for using ChatGPT just as much as it goes for Midjourney/Dall-E 3, tools that create music, etc.

    And you're absolutely right, this is going to become more and more of a problem for anyone using AI tools, and I'm curious to see how it will factor into future lawsuits.

    I could see some new factor for fair use being raised in court, or else taking this into account under one of the pre-existing factors.

  • "Is it different from observing a video tape?"

    I would think that it's different, only because you have the potential to alter what could happen.

    Does traveling back in time even guarantee that someone would react the same way in the same situation?

    Maybe, maybe not; we're entering the realm of Schrödinger's cat, as well as the question of how time travel would actually work. Do we create some new branched timeline in traveling back? Do we enter an alternate universe entirely? Do we have a time machine where paradoxes are a problem? The list goes on.

  • Fun thought experiment:

    Let's say we have a time machine and we can go back in time to a specific moment to observe how someone reacts to something.

    If that person reacts the same way every time, does that mean that by knowing what they would do, you have removed their free will?

  • I tried it once... almost got locked out, but luckily I still had a session logged in somewhere. Lemmy did not handle 2FA well a few months ago... hopefully it has changed since then. I like to enable it where I can.

  • I don't agree that it's a fake vs fake issue here.

    Even if the "real" photos were touched up in Lightroom or Photoshop, those are tools that actual photographers use.

    It goes to show that there are cases where photos of real people look more AI generated than not.

    The problem here is that we start second-guessing whether a photo was AI generated, and we run into cases where real artists are being told that they need to find a "different style" to avoid looking too much like AI-generated work.

    If that wasn't a perfect example for you then maybe this one is better: https://www.pcgamer.com/artist-banned-from-art-subreddit-because-their-work-looked-ai-generated/

    Now think of what can happen to an artist if they publish something in California that has a style that makes it look somewhat AI generated.

    The problem with this law is that it will be weaponized against certain individuals or smaller companies.

    It doesn't matter if they can eventually prove that the photo wasn't AI generated. The damage will already be done by the time they've been put through the court system. Having a law where you can put someone through that system just because something "looks" AI generated is a bad idea.

    Edit: And the intent of that law is also to cover AI text generation. Just think of all the students being accused of using AI for their homework, and how unreliable the tools for determining whether their work is AI generated have proven to be.

    We're going to unleash that on authors as well?

  • The problem here will be when companies start accusing smaller competitors/startups of using AI when they haven't used it at all.

    It's getting harder and harder to tell whether a photograph is AI generated. Sometimes it's obvious, but it makes you second-guess even legitimate photographs of people because you notice that they have six fingers or their face looks a little off.

    A perfect example of this was posted recently, where 80-90% of people thought that the AI pictures were real and that the real pictures were AI generated.

    https://web.archive.org/web/20240122054948/https://www.nytimes.com/interactive/2024/01/19/technology/artificial-intelligence-image-generators-faces-quiz.html

    And where do you draw the line? What if I used AI to remove a single item in the background like a trashcan? Do I need to go back and watermark anything that's already been generated?

    What if I used AI to upscale an image or colorize it? What if I used AI to come up with ideas, and then painted it in?

    And what does this actually solve? Anyone running a misinformation campaign is just going to remove the watermark and it would give us a false sense of "this can't be AI, it doesn't have a watermark".
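
    To make that concrete, here's a minimal sketch assuming the provenance mark lives in image metadata (as C2PA-style manifests generally do; the filenames are hypothetical). Re-encoding just the pixels with Pillow produces a copy that carries none of the original metadata:

    ```python
    from PIL import Image  # pip install Pillow

    # Copy only the pixels of a (hypothetical) AI-generated image into a
    # fresh image; Pillow doesn't carry metadata over by default, so any
    # metadata-based provenance manifest is silently dropped on save.
    src = Image.open("ai_generated.png").convert("RGB")
    stripped = Image.new("RGB", src.size)
    stripped.paste(src)
    stripped.save("no_provenance.png")
    ```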

    The actual text in the bill doesn't offer any answers. So far it's just a statement that they want to implement something "to allow consumers to easily determine whether images, audio, video, or text was created by generative artificial intelligence."

    https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB942