Posts: 0 · Comments: 63 · Joined: 2 yr. ago

  • Always has been. The laws are there to incentivize good behavior, but when the cost of complying is larger than the projected cost of not complying, companies will just ignore them and deal with the consequences. Regular folk generally can't afford not to comply (aside from all the low-stakes laws we break on a day-to-day basis), but when you have money to burn and a lot is at stake, the decision becomes more complicated.

    The tech part of it is that we don't really even know if removing data from these sorts of models is possible in the first place. The only way to remove it is to throw away the old model and make a new one without the offending data (i.e. retrain it). It's similar to how you can't make a person forget something without some really drastic measures, and even then, how do you know they forgot it? That information may still inform their decisions; they might just not be aware of it, or they might feign ignorance. The only real way to be sure is to scrap the person. Given how insanely costly retraining a model can be, the laws start looking like "necessary operating costs" instead of absolute rules. (Rough sketch of the problem below.)
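    A toy illustration of that point (the dataset, library, and model here are just stand-ins, not anything from the systems being discussed): once a model is fit, there is no handle on any single record's influence, so honoring a deletion request means refitting without it.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))           # stand-in for the training data
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    model = LogisticRegression().fit(X, y)   # original model, trained on everything

    # A deletion request comes in for record 42. The fitted weights offer no way
    # to "subtract" that record's influence, so the only reliable fix is to drop
    # it and fit a fresh model -- trivial here, enormously expensive for an LLM.
    keep = np.ones(len(X), dtype=bool)
    keep[42] = False
    retrained = LogisticRegression().fit(X[keep], y[keep])
    ```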

  • My vote for Biden was an anything-but-Trump vote, but given Biden's record as president so far, he has my vote again.

    Still not my first choice, but we live in a first-past-the-post voting system, so you gotta take what you can get.

  • "The real AI, now renamed AGI, is still very far"

    The idea and the name of AGI are not new, and AI has not been used to refer to AGI since perhaps the very earliest days of AI research, when no one knew how hard it actually was. I would argue we're back in those times, though, since despite learning so much over the years we still have no idea how hard AGI is going to be. As of right now, the only correct answer to "how far away is AGI?" is "I don't know."

  • Five years ago, the idea that the Turing test would be so effortlessly shattered was considered a complete impossibility. AI researchers knew it was a bad test for AGI, but actually creating an AI agent that could pass it without tricks was surely still at least 10-20 years out. Now my home computer can run a model that talks like a human.

    Being able to talk like a human used to be what the layperson would consider AI; now it's not even AI, it's just crunching numbers. And this has been happening throughout the entire history of the field. You aren't going to change this person's mind. This bullshit of discounting the advancements in AI has been here from the start; it's so ubiquitous that it has a name.

    https://en.wikipedia.org/wiki/AI_effect

  • ChatGPT is amazing for describing what you want, getting a reasonable output, and then rewriting nearly the whole thing to fit your needs. It's a faster (shittier) Stack Overflow.

  • Both. I was using GPT-4 for some processing of text. The July 20th update came along, and for the exact same input it could no longer follow my directions; I had to tweak the prompt a bunch to handle a whole new set of edge cases. (Sketch of one workaround below.)
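    One common mitigation (not something the comment above claims to have done, and the snapshot name and SDK style are assumptions): request a dated model snapshot instead of the floating alias, so silent updates don't change behavior for the same prompt. A minimal sketch, assuming the pre-1.0 openai Python SDK:

    ```python
    import openai

    openai.api_key = "sk-..."  # placeholder key

    resp = openai.ChatCompletion.create(
        model="gpt-4-0613",    # dated snapshot rather than the moving "gpt-4" alias
        temperature=0,         # reduce run-to-run variation for the same input
        messages=[
            {"role": "system", "content": "Extract every date mentioned in the text, one per line."},
            {"role": "user", "content": "The contract runs from 1 May 2023 until 30 April 2024."},
        ],
    )
    print(resp.choices[0].message.content)
    ```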

  • Given the type of people we are targeting here, I think helium blow-up dolls are a bit of a waste, especially considering the scale we would need to pull this off at to make it somewhat believable. Better would be to use hydrogen: it's so much cheaper than helium, has better lift, and is not a limited resource. Along with that, a custom order of human-shaped and roughly human-colored balloons (with painted-on clothing patterns) would work better. Likely a lot cheaper at larger scales, too, since blow-up dolls are made of tougher material than your average balloon. This would also allow for the pursuit of more sustainable materials, given that we are just sort of releasing this stuff into the sky.

    There is also the matter of making it realistic. If we are limiting this to maybe one city, then it's best to build some devices that automatically release them on timed schedules. Load these up with a handful of people-balloons each and let them release with increasing frequency throughout the day. Should be a bit more convincing and gets a bigger effect. For cleanup, we already filled these guys with hydrogen, so why not just light them up? Might make for a good effect and leave less waste to be examined, making it more difficult to prove that this is not a rapture event.

  • The feature is often not very well advertised; a pair of Bluetooth noise-cancelling headphones I'm looking at doesn't seem to list it prominently despite it being, imo, a pretty important feature. Searching product listings might not give you an accurate idea of what does and does not support multipoint.

  • Bought a pair from Zenni some 3 years ago for literally pennies ($15 for the frames, $10 for the lenses). I have since carelessly snapped them (but keep extending their lifespan unnaturally with super glue). Gonna buy my next pair from Zenni too. I swear by them now for how cheap and durable these are; I rarely had a pair of glasses survive 2 years before, and these were so much cheaper.

    They also have regular-people levels of quality, but I'm poor, so it's nice they have shit for people like me too.

  • A given programming language often has limitations that are largely different from those of other languages. This means different languages tend to be used for different kinds of problems. Want something fast? Use C. Want to write something quickly? Use Python. Want it to run on just about anything? Use Java. And so on.

    So why don't we make one ultimate language, or a few that fulfill all needs? Well, partially because we haven't figured out how to do that, but also because it's really easy to learn yet another language once you understand how they work. I can write in Python, JS, C, C++, C#, Java, Kotlin, Rust, Perl, Ruby, PHP, Forth, Lisp, and I could keep going for quite a while. The underlying concepts are largely the same, so picking up a new language is no big deal (though getting good at it is a bigger deal). We have so many because, ultimately, it just doesn't really matter that we have so many. (Toy example of those shared concepts below.)
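    A rough illustration of the "underlying concepts are the same" point (Python chosen arbitrarily, and the function is purely invented for the example): the pieces below -- variables, a loop, a conditional, a function -- exist in essentially the same form in C, Java, Rust, and the rest; mostly the syntax and the trade-offs change.

    ```python
    def count_longer_than(words, limit):
        """Count how many words are longer than `limit` characters."""
        count = 0
        for word in words:          # C would spell this: for (int i = 0; i < n; i++)
            if len(word) > limit:   # Java: if (word.length() > limit)
                count += 1
        return count

    print(count_longer_than(["spam", "eggs", "antidisestablishmentarianism"], 5))
    ```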

  • I figured he specifically practiced to show that his high IQ score is not indicative of his actual intelligence. Like, he intentionally inflated it with studying, because otherwise whatever score he got would be a brag; after studying, any score can be attributed (at least in part) to the studying (and motivation and all the other stuff), so it isn't really a brag about his intelligence but a brag about the fact that he studied. Which isn't really a brag at all.

  • You are in a "bitch about reddit" community complaining about people bitching about reddit. Bruh, this is why this place was created: to bitch about reddit.

    Your second point is valid. Also, this feature exists to prevent spam from newly created accounts, so why is it even worth complaining about? New accounts shouldn't be trusted as much as well-established accounts, and it's generally not that difficult to get enough karma just by commenting. For humans it doesn't pose much of a barrier to entry; for bots it poses some.

  • We don't understand it because no one designed it. We designed how to train an NN, and we designed some parts of the structure, but not the individual parts inside. The largest LLMs have upwards of 70 billion different parameters, each an individual number the training process can tweak. There are just too many of them to understand what any individual one does, and since we just let an optimization algorithm do its optimizing, we can't really even know what groups of them do.

    We can get around this: we can study it like we do the brain. Instead of looking at what an individual part does, group them together and figure out how the group influences things (AI explainability), or even get a different NN to look at it and generate an explanation (post hoc rationale generation). But that's not really the same as actually understanding what it's doing under the hood. What it's doing under the hood is more or less fundamentally unknowable; there's just too much information, and it's not organized in a way we can understand. Maybe one day we will be able to abstract what is going on in there and organize it in an understandable manner, but not yet. (Toy sketch of the scale problem below.)
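    A small sketch of the scale problem (PyTorch is just an assumption for illustration; the comment doesn't name a framework): even a toy network is a pile of individually meaningless numbers, and the comment describes the same situation at 70-billion-parameter scale.

    ```python
    import torch.nn as nn

    # A deliberately tiny network: two linear layers and a nonlinearity.
    model = nn.Sequential(
        nn.Linear(128, 256),
        nn.ReLU(),
        nn.Linear(256, 10),
    )

    # Count every learnable number the optimizer is free to tweak.
    total = sum(p.numel() for p in model.parameters())
    print(f"{total} learnable parameters in this toy model")  # already ~35,000

    # Any single weight is just a float; nothing about it says what it "does".
    print(model[0].weight[0, :5])
    ```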