Posts: 1 · Comments: 33 · Joined: 8 mo. ago

  • Why not create a comparison like "generating 1000 words of your fanfiction consumes as much energy as you do all day" or something that's easier to compare?

    Considering that you can generate 1000 words in a single prompt to ChatGPT, the energy to do that would be about 0.3 Wh.

    That's about as much energy as a typical desktop would use in about 7 seconds while browsing the fediverse (assuming a desktop consuming energy at a rate of ~150 W).

    Or, on the other end of the spectrum, if you're browsing the fediverse on Voyager with a smartphone consuming energy at a rate of 2W, then that would be about 9 minutes of browsing the fediverse (4.5 minutes if using a regular browser app in my case since it bumped up the energy usage to ~4W).
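
    For anyone who wants to check that math, here's a quick back-of-the-envelope sketch (the ~0.3 Wh per prompt and the device wattages are the same rough estimates as above):

    ```python
    # Back-of-the-envelope: how long a device can run on the energy of one
    # ~1000-word ChatGPT prompt (~0.3 Wh, an estimate, not a measured figure).
    PROMPT_WH = 0.3  # assumed watt-hours per prompt

    def runtime_seconds(device_watts: float) -> float:
        """Seconds a device drawing `device_watts` can run on PROMPT_WH of energy."""
        return PROMPT_WH * 3600 / device_watts

    for label, watts in [("desktop", 150), ("phone, Voyager", 2), ("phone, browser", 4)]:
        print(f"{label}: {runtime_seconds(watts):.0f} s")
    # desktop: 7 s, phone (Voyager): 540 s (~9 min), phone (browser): 270 s (~4.5 min)
    ```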

  • I agree with your comment except that I think you've got the privacy part wrong there. Any company can come in and scrape all the information they want, including upvote and downvote info.

    In addition, if you try to delete a comment, it's very likely that it won't be deleted by every instance that federates with yours.

  • I think you mean that you can choose a project that doesn't have an "algorithm" (in the sense that you're conveying).

    Anyone can create a project with ActivityPub that has an algorithm for feeding content to you.

  • I think this would only be acceptable if the "AI-assisted" system kicks in when call volumes are high (when dispatchers are overburdened with calls).

    For anyone who's been in a situation where you're frantically trying to get ahold of 911 and you have to make 10 calls to do so, a system like this would have been really useful to help relieve whatever call volume situation was going on at the time. At least in my experience it didn't matter too much, because the guy had already been dead for a bit.

    And for those of you who are dispatchers, I get it; it can be frustrating to get 911 calls all the time for the most ridiculous of reasons. But I still think it would be best if a system like this only kicked in when necessary.

    Being able to talk to a human right away is way better than essentially being asked to "press 1 if this is really an emergency, press 2 if this is not an emergency".

  • I had to click to figure out just what an "AI Browser" is.

    It's basically Copilot/Recall, but only for your browser. If the models run locally, the information is protected, and none of that information is transmitted, then I don't see a problem with this (although they would have to prove it by being open source). But, as it is, this just looks like a browser with major privacy/security flaws.

    At launch, Dia’s core feature is its AI assistant, which you can invoke at any time. It’s not just a chatbot floating on top of your browser, but rather a context-aware assistant that sees your tabs, your open sessions, and your digital patterns. You can use it to summarize web pages, compare info across tabs, draft emails based on your writing style, or even reference past searches.

    Reading into it a bit more:

    Agrawal is also careful to note that all your data is stored and encrypted on your computer. “Whenever stuff is sent up to our service for processing,” he says, “it stays up there for milliseconds and then it’s wiped.” Arc has had a few security issues over time, and Agrawal says repeatedly that privacy and security have been core to Dia’s development from the very beginning. Over time, he hopes almost everything in Dia can happen locally.

    Yeah, the part about sending everything appearing in my browser window (passwords, banking, etc.) to some other computer for processing makes the other assurances worthless. At least they have plans to get everything running locally, but this is a hard pass for me.

  • My question simply relates to whether I can support the software development without supporting lemmy.ml.

    No. You can't support Lemmy without supporting lemmy.ml, because the developers use lemmy.ml for testing. They haven't created a way for users to direct their donations to one but not the other.

    That's why others are suggesting you should just support a different but similar fediverse project like PieFed or Mbin instead.

  • Yeah, if you're relying on them to be right about anything, you're using them wrong.

    A fine-tuned model will go a lot further if you're looking for something specific, but they mostly excel at summarizing text or brainstorming ideas.

    For instance, if you're a Dungeon Master in D&D and the group goes off script, you can quickly generate the back story of some random character that you didn't expect the players to do a deep dive on.

  • Depends on the electric kettle; the first few I looked at on Amazon run at ~600-800 watts.

    So, on the lower end there, you're looking at about 0.166 Wh every second.

    So a single prompt to ChatGPT (0.3 Wh) uses about the same energy as an electric kettle does in less than 2 seconds.
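
    Same math as a quick sketch, using the ~600 W low end and the ~0.3 Wh per prompt estimate from above:

    ```python
    # How long a ~600 W kettle (low end of the ones mentioned above) takes
    # to use the ~0.3 Wh of a single ChatGPT prompt. Both numbers are estimates.
    PROMPT_WH = 0.3
    KETTLE_W = 600

    print(KETTLE_W / 3600)              # ≈ 0.167 Wh used per second
    print(PROMPT_WH * 3600 / KETTLE_W)  # ≈ 1.8 seconds of boiling per prompt
    ```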

  • While I agree that their comment didn't add much to the discussion, it's possible that you used more electricity to type out your response than it took for them to post theirs.

    It's estimated that a single ChatGPT prompt uses up ~0.3 Wh of electricity.

    If @Empricorn@feddit.nl is on a desktop computer browsing the internet using electricity at a rate of ~150 W, and @TropicalDingdong@lemmy.world is on a smartphone, then you would only have ~7 seconds to type up a response before you begin using more electricity than they did.
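
    A rough sketch of that break-even, using the same estimated figures (~0.3 Wh per prompt, ~150 W desktop, ~2 W phone):

    ```python
    # Break-even: after how many seconds of typing on a ~150 W desktop do you
    # pass the ~0.3 Wh of their ChatGPT prompt? All figures are rough estimates.
    PROMPT_WH = 0.3
    DESKTOP_W = 150
    PHONE_W = 2

    print(PROMPT_WH * 3600 / DESKTOP_W)              # ≈ 7.2 s, ignoring their phone
    print(PROMPT_WH * 3600 / (DESKTOP_W - PHONE_W))  # ≈ 7.3 s, if their phone keeps drawing ~2 W
    ```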

  • I think you missed the timeline at the very end of the page: they reported the vulnerability back in April, were rewarded for finding it, it was patched in May, and they were allowed to publicize it as of today.

  • Edit: This is probably the wrong community for asking this question, since this community is meant for tech-related news. c/asklemmy might be better, or !technology@piefed.social, which allows discussions on anything tech related.

    Smart meters work mostly the same way meters have always worked, with one minor difference: they occasionally transmit the current reading via a radio frequency. Same as always, you install them at some point where they can measure just how much water/electricity/gas is flowing into the home. The transmitting frequency will be different depending on the device and what country you live in.

    If you want to see the details on how water meters measure water flow, go here: https://en.wikipedia.org/wiki/Water_metering

    If you want the details on how gas meters work with all of the different sensors for that, go here: https://en.wikipedia.org/wiki/Gas_meter

    If you want the details on how electricity meters work, go here and read the "Electromechanical" and "Electronic" sections: https://en.wikipedia.org/wiki/Electricity_meter#Electromechanical

    Some newer meters are set up to try to guesstimate additional information, such as what is being used in your home. For instance, with water meters, a small flow of water for a short time can mean a faucet was turned on or a toilet was flushed, while a larger flow for a longer time can mean a bathtub, a shower, or an appliance (dishwasher/laundry) is being used, etc. A toy sketch of that kind of guessing is below.
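
    To make that concrete, here's a purely illustrative toy sketch; the flow thresholds and categories are invented for the example, not taken from any real meter:

    ```python
    # Toy illustration of guessing appliance use from water-meter flow data.
    # Thresholds and labels are made up for the example, not from a real meter.
    def guess_usage(liters_per_minute: float, duration_minutes: float) -> str:
        """Very rough guess at what a water-flow event might have been."""
        if liters_per_minute < 8 and duration_minutes < 2:
            return "faucet or toilet flush"
        if liters_per_minute >= 8 and duration_minutes >= 5:
            return "shower, bathtub, or appliance (dishwasher/laundry)"
        return "unclassified"

    print(guess_usage(5, 1))    # faucet or toilet flush
    print(guess_usage(12, 10))  # shower, bathtub, or appliance (dishwasher/laundry)
    ```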

  • Based on the uptick in "I was banned from Reddit" posts, I'm thinking that we're getting a lot more users who were banned from Reddit for good reason. It also looks like Reddit has stepped up its game in keeping those users off its platform.

  • It's not sending the audio to an unknown server. It's all local. From the article:

    The system then translates the speech and maintains the expressive qualities and volume of each speaker’s voice while running on a device, such as mobile devices with an Apple M2 chip, like laptops and Apple Vision Pro. (The team avoided using cloud computing because of the privacy concerns with voice cloning.)

  • That's not AI; that's just a bad Photoshop/InDesign job where they layered the text underneath the image of the coupon with the protein bottles. The image has a white background; if it had a transparent background, there would have been no issue.

    Edit: Looking a little closer, it looks more like some barely off-white arrow was at the top of the coupon image.

    Edit 2: If you're talking about the text that looks like a prompt, it could be a prompt, or it could be a description of what they wanted someone to put on the poster. The image itself doesn't look like AI, considering those products actually exist and AI usually doesn't do so well on small text when you zoom in on a picture.

  • Highlighting the main issue here (from the article):

    “This means that it is possible for the WhatsApp server to add new members to a group,” Martin R. Albrecht, a researcher at King's College in London, wrote in an email. “A correct client—like the official clients—will display this change but will not prevent it. Thus, any group chat that does not verify who has been added to the chat can potentially have their messages read.”

  • Google sells it as an updated extension framework to improve the security, privacy, and performance of extensions... but it also nerfs ad blockers' ability to block all ads.

    There are some forks of Chrome that haven't implemented the new manifest. So if you really need full ad blocking, look for those.

  • Videos @lemmy.world

    Taking a special selfie to prove I’m not AI