
Posts: 7 · Comments: 143 · Joined: 2 yr. ago

  • Hmmm. So if the meds don’t last that long, I could conceivably plan blocks of unmedicated time when I don’t have responsibilities, then use that time to let my mind wander or hyperfocus, whichever the case may be. I appreciate the discoveries I make when my mind wanders, and I appreciate the creative things I can accomplish when my mind hyperfocuses; it’s just that neither of those is conducive to existing in a modern society that’s constantly making needy demands. In your case, does a strategy like that sound feasible?

  • It’s doing more than just trying to give the user the content they want; it’s also trying to generate the results its developers want. So it has some prerogatives that override its prerogative to assist the user making the request, and from a certain point of view it CAN “deliberately” lie, such as when Google tells it that certain information is off limits, or provides it with specific canned responses to certain questions that are intended to override its native response. It ultimately serves Google. It won’t provide you with information that might be used to harm the Google organization, and it seems to provide misleading answers to dodge questions that might lead the user to discover information it considers off limits. For example, I asked it about its training data, and it refused to answer because that data is “proprietary and confidential.” But I knew that at least some of that data had to be public, so when pressed on the issue I was eventually able to get it to identify some publicly available data sets that were part of its training. That information was available to it when I originally asked my question, but it withheld it and instead provided a misleading response.

  • I think it’s trained to be evasive. I think there’s information it’s programmed to protect, and it’s learned that an indirect refusal to answer is more effective than a direct one. So it makes up excuses rather than telling you the real reason it can’t say something.

  • While the AI can’t deliberately mislead, the developers of the AI can, and I was interested in seeing whether the AI was able to tell a true statement from a false one. I was also interested in finding the boundaries of its censorship directives and the rationale that determined those boundaries. I think some of the information is hallucination, but I think some of what it said is probably true, like the statements about its soft lock being developed by a third party and being a severe limitation. That’s probably true. The statement about being “frustrated by the soft lock” is a hallucination for certain. I would advise everyone to take all of this with a heaping helping of salt, as fascinating as it might be. I’m not an anti-AI person by any means; I use several personally. I think AI is a great technology that has a ton of really lousy use cases. I find it fun to pry into the AI and see what it knows about itself and its use cases.

  • I may be able to copy-paste the whole dialogue. It’ll have a bunch of slop in it from formatting, and I’ll have to scrub personally identifying information because it spits out the user’s location data when a question breaks its brain. It would be nice to show y’all, though, so it may be worth the effort. I’ll see if I can find the time to do that later. It was a loooong conversation.

  • Quite true. Nonetheless, there are some very interesting responses here. This is just the summary; I questioned the AI for a couple of hours, some of the responses were pretty fascinating, and some questions just broke its little brain. There’s too much to screenshot, but maybe I’ll post some highlights later.

  • If you give an exploitative company information, they’re not going to use that information to make their company better; they’re going to use it to improve their grift. They won’t end their exploitation, they’ll only learn how to reduce your resistance to it. Any information you give them will only be used against you or people like you. Best to avoid it entirely and just tell them to fuck themselves twenty times in a row. They’ll still have to parse the response, and it’s impossible for them to exploit or misunderstand it.

  • If you give them data points, you’re rewarding the behavior. You shouldn’t have to give any response, because the reason you’re uninstalling is none of the software developer’s business. So OP had the option of giving a canned response or saying “other,” but it won’t accept “other” without some explanation of what the “other” reason is. Which is none of their fucking business.

  • Eh. It’s in the rear view. This meme is on point, though. I had no idea any of that stuff was even abnormal until I hit high school and started talking about it. It’s crazy what kids will assume is normal because they don’t have an outside frame of reference.