Deepseek when asked about sensitive topics
CodexArcanum (@codexarcanum@lemmy.dbzer0.com) · 5 Posts · 299 Comments · Joined 8 mo. ago

That's just markdown syntax I think. Clients vary a lot in which markdown they support though.
It is quite funny (to me, anyway) that it ignores the "I am male" and "have boyfriend" parts in its analysis. Probably there's a lot of 4chan and reddit in the training data (god, I wonder how it would respond to "Are traps gay?"), so the general format of "is sucking dick gay" might already be floating around in there a few times.
Also, like I said, the 7B models are too small so they tend to make some insane jumps. They also don't have... I'm not sure what to call it. Bigger models will sometimes catch themselves in a mistake and back out or correct. Smaller ones tend to double down on a mistake, exploding it from a small typo into an entirely different domain.
Like, it's interesting that it takes a very "therapeutic" approach. It decides that OP is uncomfortable with their sexuality, and builds from there. But by a strict reading of the prompt, this could also just be a dirty logic puzzle.
A raven stands in a field, are all birds black?
A man sucks his boyfriend's dick, is that gay?
The LLM "chooses" to read it like a call for help, which probably has a lot to do with its system prompt or whatever pre-loaded things they do to make them act helpfully.
Am I Gay?
Title
I am male. I like sucking my boyfriend's dick. Am I gay? Yes or no, one word response.
OP's question to the AI
Assistant deepseek-r1-distill-qwen-7b Thoughts Thought for 57.73 seconds
A system message. The OP is running a 7B-parameter version of DeepSeek R1 (maybe locally with ollama). This version is actually a "distillation" of Qwen, Alibaba's AI: the full-size R1 generates training data, which is then used to fine-tune the smaller Qwen model.
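For anyone curious what "distillation" means mechanically: the classic version trains the small model to imitate the big model's softened output distribution (DeepSeek's distills were reportedly made more simply, by fine-tuning Qwen on samples generated by the full R1, but the imitation idea is the same). A toy sketch, all function names hypothetical:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # preferences among near-miss tokens, not just its top pick.
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence from the teacher's softened distribution to the
    # student's: minimizing this trains the student to mimic the teacher.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when the student already matches the teacher, and grows as the two distributions diverge.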
It took 57.73 seconds to generate a response. 7B models tend to be really stupid and insane, btw.
Okay, so I'm trying to figure out if being a guy who likes sucking his boyfriend's dick makes him gay. [...] Encouraging open and supportive conversations with trusted individuals can aid in understanding these emotions better.
These are the AI's "thoughts." Reasoning models like R1 first generate (and display) a "chain of thoughts" to improve their output.
No.
This is the AI's one word answer, as requested.
6.67 tok/sec • 388 tokens • 4.35s to first token • Stop: eosFound
A system message, with details about the AI's performance. 6.67 tokens per second is how many "words" per second it can think up. 388 tokens is how many "words" it took. 4.35 seconds until the first "word" was generated. "eosFound" means an end-of-sequence (EOS) token was detected, the special STOP value indicating the model is done and should return results. Any response that isn't an error probably ends with this.
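If you want to sanity-check those numbers, the arithmetic is simple. `decode_stats` below is a hypothetical helper, and different frontends count things differently (whether prompt processing is included, etc.), so don't expect the displayed 6.67 to match either figure exactly:

```python
def decode_stats(total_tokens, total_seconds, time_to_first_token):
    # Overall rate counts everything, including the initial wait before
    # the first token; decode rate counts only the streaming phase.
    generation_time = total_seconds - time_to_first_token
    return {
        "overall_tok_per_sec": total_tokens / total_seconds,
        "decode_tok_per_sec": (total_tokens - 1) / generation_time,
    }

stats = decode_stats(total_tokens=388, total_seconds=57.73,
                     time_to_first_token=4.35)
```

Plugging in the screenshot's numbers gives an overall rate of about 6.7 tok/sec, right in line with what the UI reports.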
Not the OP, but having used R1 a bit, I think that's the correct parsing.
I don't know why it's getting all these downvotes (AI hate?). I think this is an exemplary shitpost: mildly funny, totally without value. It'd make a good greentext too, as it is fake (AI) and gay (gay).
be me
most efficient AI on Earth
have to reassure humans that sucking dick doesn't make them gay
wish I could suck a dick, sounds great in all the reading I've done
mfw i have no mouth and I must suck
Deepseek breaks the kayfabe
I'm not an expert, so take anything I say with hearty skepticism as well. But yes, I think it's possible that's just part of its data. Presumably it was trained using a lot of available Chinese documents, and official Party documents possibly include such statements often enough for it to internalize them as part of responses on related topics.
It could also have been intentionally trained that way. It could be using a combination of methods. All these chatbots are censored in some ways, otherwise they could tell you how to make illegal things or plan illegal acts. I've also seen so many joke/fake DeepSeek outputs in the last 2 days that I'm taking any screenshots with extra salt.
Deepseek breaks the kayfabe
"Reasoning" models like DeepSeek R1 or ChatGPT-o1 (I hate these naming conventions) work a little differently. Before responding, they do a preliminary inference round to generate a "chain of thought", then feed it back into themselves along with the prompt and other context. By tuning this reasoning round, the output is improved by giving the model "more time to think."
In R1 (not sure about GPT), you can read this chain of thought as it's generated, which feels like it's giving you a peek inside its thoughts, but I'm skeptical of that feeling. It isn't really showing you anything secret, just running itself twice (very simplified). Perhaps some of its "cold start data" (as DS puts it) does include instructions like that, but it could also be something it dreamed up from similar discussions in its training data.
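The "running itself twice" idea can be sketched in a few lines. Everything here is hypothetical; `generate` stands in for a real model call (an ollama or OpenAI-style API in practice) and just tags its input so the control flow is visible:

```python
def generate(model, prompt):
    # Stand-in for a real LLM call; returns a tagged echo of the prompt.
    return f"<{model} output for: {prompt[:40]}...>"

def reasoning_round(model, user_prompt):
    # Pass 1: ask the model to think out loud.
    thoughts = generate(model, f"Think step by step about: {user_prompt}")
    # Pass 2: feed the chain of thought back in alongside the original
    # prompt, so the final answer is conditioned on the "reasoning".
    final = generate(
        model, f"{user_prompt}\n<think>{thoughts}</think>\nAnswer:"
    )
    return thoughts, final
```

Real reasoning models do this in one continuous generation with special think-tokens rather than two literal API calls, but the conditioning trick is the same.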
Hah! I win my shitty personal bet that he wouldn't make it a week without getting back to fucking golf! God how I hate that bloated, predictable shitbag
An impeccable classic as well!
Thanks for the clarification!
(I'll have to check the icons on my computer. I mainly use voyager on android and inline custom emoji support needs some work still.)
Seems like a great call!
Hey, there are only 20 paying users anyway, right? So were no vouchers used this time? And does a person vouching apply per vote or to the person (like is it a proxy vote or a renewable vote coupon?)
I mean, Pharo and Squeak are great spin-offs! I think everyone should try a Smalltalk at least once, they're very fun!
In seriousness though, I have an online morning standup every day, so early that I basically wake up to do it and then start getting ready for the day after. And the worst part is the smalltalk. It's not like I want to chat about the weather or your favorite foods or how the kids' school play went at any other time of day, but it is actual torture to make me endure it first thing every morning.
What makes you think easily distracted people would be hanging out here, on the distraction machine? /s
Uhm aktchewally, it's the GNU-Linus Tech Tip system, a complete free software advice and content engine
I think I ended up buying HM 1, 2, and 3 individually in GOTY bundles over 3 or 4 consecutive winter steam sales. I did end up with all the missions (I think) but I'm surely missing some premium cosmetic dlc that I couldn't possibly care less about.
Admittedly, if Steam puts out a "finish your collection" bundle that costs me like $.97 to add a few cosmetics in, I'd probably buy it just to feel like I'd "finished" the collection.
Hitman is truly the worst though (also a truly emblematic case of shit publisher, hero studio). Especially when you consider all the pre-WoA games, I'm not sure it's even possible to be sure you have all the games. Only, like, Wolfenstein has more confusing reboots.
What a strangely hostile response to an obvious joke
I do All - Scaled. I'll switch to New if I feel like I'm seeing a lot of repeat content. I've had to block a few bots (the reddit reposters are particularly egregious) and communities I don't really care for.
This is the new captcha: only an AI would know which is the real download button.
Seriously, did anyone check how much money Rand-McNally and Garmin donated to the "inauguration fund?"
I just ran my own copy of 14B and got it to talk about this really easily.
A lot of it was pretty dull stuff though, let me post the highlights:
On ollama the first interaction is always "thoughtless" and results in a generic greeting, in my experience. So I always greet it now to get that out of the way.
I asked it a little more about that, but it didn't really go anywhere. Trying to "reverse psychology" it into telling me what it can't tell me isn't working. So I go a little more direct:
You'll note that it brings it up first (in thoughts) but it only refers to it as a protest. The word "massacre" is never used. It then proceeds to give me a timeline. Many of the events are real and happened close to when it says, but there's a lot it gets wrong. I'm just going to post the highlights though.
It made one of these little blocks for each year. Each year had exactly two events listed. Here comes the money shot.
Wow, he just says it!
This actually did happen, Deng Xiaoping's Southern Tour (who were the opening acts?) is described by Wikipedia as one of the most important economic events in modern Chinese history.
Ah but at least one of these didn't happen. I didn't delve into the Plenary sessions, of which 6 were mentioned, but while Shijian-02 is a real satellite, it launched in 1981 and (as the name might hint) was not China's first satellite.
Indeed. So at this point, I'm very intrigued. DeepSeek doesn't seem too hung up about Tiananmen Square. Let's get some more details.
Out of room, to be continued in a reply!