Oh god
Zozano @aussie.zone · 6 posts · 663 comments · joined 2 yr. ago

Fuck yeah, I'm moving to New Zealand!
I thought I would never need to see this meme format again...
Fair assessment. Though I didn't go so far as to assert that it 'was' all bullshit, which is why I prefaced my comment with an admission of ignorance.
In any case, I'm convinced that my friend was not doing it right, either through his own failure to understand or through a lack of adequate instruction during guided meditation, because he didn't seem to have any meaningful insight into his own mind, beyond having a better imagination, which I suppose does translate to a more creative mind in general.
In addition, he didn't grasp the idea of being able to 'drop in' to a meditative state when not actively practicing. After I introduced him to mindfulness, he found it far more insightful and beneficial in general.
I talk like a guy who read a pop psychology book? That's very judgmental. I did my best to articulate my thoughts and you arrogantly claim your own response is better, even though the court of public opinion regards my explanation as preferable.
You claim meditation is helpful for focusing attention, yet this reply is the first of yours that isn't riddled with grammatical or structural errors. You don't need flowery language to describe your sensations.
As for the dichotomy between drugs and meditation, it all depends on what metric you're evaluating. The ones mentioned in my earlier comment (which you've reduced to 'getting high') are the metrics I used, but they don't encompass the entire spectrum of drug use. There are ways the two are comparable and ways they're different; it's a very narrow perspective to simply claim that one is just a more extreme version of the other, or that one is 'better'.
It might be more beneficial for some people to think of 'meditation' as 'exercise'.
If someone says they've exercised, we don't automatically assume they've lifted weights, done cardio, or stretched; we know how broad the term is.
One of my friends did 'meditation' during his karate days, but failed to understand a lot of the basics of science-focused practices like mindfulness.
Turns out his dojo was practicing Zen meditation, which (according to him) involves trying to elicit vivid imagery in the mind.
Now, I don't know a lot about Zen meditation - maybe they did it as a cultural thing - but from what he was able to tell me, it sounded like a whole lot of junk mind-flailing.
Let’s break this down: You’re essentially saying that paying attention to something is how we experience reality. Well, no kidding. If you pay attention to something, you’re going to notice it more. But that's not some grand, cosmic revelation. That’s just basic human perception.
I think there's a bit of overcomplication here. Yes, meditation involves focusing attention, but describing it as the "axis of your reality" is a bit much. The basic idea is that by concentrating, we become more aware of certain things, which does influence our experience. That’s a simple process, not some deep philosophical mystery.
The "wings" analogy also feels like an attempt to make meditation sound more magical than it really is. Meditation is a way to help focus the mind, find calm, and possibly gain insight. But it's not about discovering some hidden set of "wings" or some grand spiritual power. It’s just a practice for mental clarity.
As for the comparison to drugs, both meditation and drugs alter consciousness, but in different ways. Drugs can give an intense experience, while meditation tends to offer a slower, more controlled shift in awareness. Saying that drugs are weak because they’re like a “dumb machine” doesn’t really capture the complexity of either experience. Both have their place, and both can have benefits, depending on what someone’s looking for.
In short, meditation isn’t some mystical or supernatural process; it’s about training attention in a specific way. The real value comes from consistency and practice, not some grand revelation.
Edit: also, bold of you to assume my experiences are scant and born of conventional thought, when you have no way of actually knowing what experiences I've had.
It's evident that your experiences with meditation aren't sufficient to counter your hubris.
Meditation is essentially a self-imposed flow state: an artifact of consciousness reflecting extreme focus. It's akin to a runner's high. Its features include ego dissolution, a distorted sense of time, reduced perception of pain, and feelings of bliss.
This is normally due to the release of neurotransmitters - dopamine, serotonin, endorphins and GABA - the same chemicals affected by common recreational drugs.
Drug use regrettably short-cuts these features. With training, these states of consciousness can be attained without any downsides (barring destabilizing intuitive realizations, like free will being an illusion), though at the cost of not being quite as powerful as drugs.
Think of it this way: meditation is like pouring happy juice on your brain slowly. Taking drugs is like placing the bottle on your head and smashing it with a hammer - sure, you're going to get a lot of happy juice on your brain, but the glass might make it unbearable, you have no choice about when it ends, and the next day you're going to be forced to pick the shards of glass out.
Weird analogy I suppose, but it helps to illustrate why OP might prefer the slow drip.
At the end of the day, there's no debate about whether meditation can produce these feelings - it's simply a matter of whether a person has the time and interest to seek these things out, or whether they want to flood their brains with happy juice.
Personally, I live in both camps; I've had profound realizations about my own mind while meditating, but I also like getting zonked off my gourd.
Lol OP is actually right but not explaining it well in the comments.
The Fap Door
You're not actually disagreeing with me, you’re just restating that the process is fallible. No argument there. All reasoning models are fallible, including humans. The difference is, LLMs are consistently fallible, in ways that can be measured, improved, and debugged (unlike humans, who are wildly inconsistent, emotionally reactive, and prone to motivated reasoning).
Also, the fact that LLMs are “trained on tools like logic and discourse” isn’t a weakness. That’s how any system, including humans, learns to reason. We don’t emerge from the womb with innate logic, we absorb it from language, culture, and experience. You’re applying a double standard: fallibility invalidates the LLM, but not the human brain? Come on.
And your appeal to “fuck around and find out” isn't a disqualifier; it’s an opportunity. LLMs already assist in experiment design, hypothesis testing, and even simulating edge cases. They don’t run the scientific method independently (yet), but they absolutely enhance it.
So again: no one's saying LLMs are perfect. The claim is they’re useful in evaluating truth claims, often more so than unaided human intuition. The fact that you’ve encountered hallucinations doesn’t negate that - it just proves the tool has limits, like every tool. The difference is, this one keeps getting better.
Edit: I’m not describing a “reasoning model” layered on top of an LLM. I’m describing what a large language model is and does at its core. Reasoning emerges from the statistical training on language patterns. It’s not a separate tool it uses, and it’s not “trained on logic and discourse” as external modules. Logic and discourse are simply part of the training data, meaning they’re embedded into the weights through gradient descent, not bolted on as tools.
No, I’m specifically describing what an LLM is: a statistical model trained on token sequences to generate contextually appropriate outputs. That’s not “tools it uses”; that is the model. When I said it pattern-matches reasoning and identifies contradictions, I wasn’t talking about external plug-ins or retrieval tools; I meant the LLM’s own internal learned representation of language, logic, and discourse.
You’re drawing a false distinction. When GPT flags contradictions, weighs claims, or mirrors structured reasoning, it's not outsourcing that to some other tool, it’s doing what it was trained to do. It doesn't need to understand truth like a human to model the structure of truthful argumentation, especially if the prompt constrains it toward epistemic rigor.
Now, if you’re talking about things like code execution, search, or retrieval-augmented generation, then sure, those are tools it can use. But none of that was part of my argument. The ability to track coherence, cite counterexamples, or spot logical fallacies is all within the base LLM. That’s just weights and training.
So unless your point is that LLMs aren't humans, which is obvious and irrelevant, all you've done is attack your own straw man.
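To make the "just weights and training" point concrete, here's a minimal sketch of next-token prediction - assuming the Hugging Face transformers library, with GPT-2 standing in for a modern LLM, and a prompt I've picked purely for illustration:

```python
# Minimal sketch: the "reasoning" is nothing more than a probability
# distribution over the next token, computed from learned weights.
# Assumes the Hugging Face `transformers` library; GPT-2 is a stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "All men are mortal. Socrates is a man. Therefore, Socrates is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probabilities for the token that follows the prompt, straight from the weights.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}: {prob:.3f}")
```

Run that and you get a ranked list of candidate tokens with probabilities - that's the entire mechanism we're arguing about, no bolted-on logic module anywhere.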
I do understand what an LLM is. It's a probabilistic model trained on massive corpora to predict the most likely next token given a context window. I know it's not sentient and doesn't “think,” and doesn’t have beliefs. That’s not in dispute.
But none of that disqualifies it from being useful in evaluating truth claims. Evaluating truth isn't about thinking in the human sense; it's about pattern-matching valid reasoning, sourcing relevant evidence, and identifying contradictions or unsupported claims. LLMs do that very well, especially when prompted properly (see the sketch at the end of this comment).
Your insistence that this is “dangerous naïveté” confuses two very different things: trusting an LLM blindly, versus leveraging it with informed oversight. I’m not saying GPT magically knows truth, I’m saying it can be used as a tool in a truth-seeking process, just like search engines, logic textbooks, or scientific journals. None of those are conscious either, yet we use them to get closer to truth.
You're worried about misuse, and so am I. But claiming the tool is inherently useless because it lacks consciousness is like saying microscopes can't discover bacteria because they don’t know what they're looking at.
So again: if you believe GPT is inherently incapable of aiding in truth evaluation, the burden’s on you to propose a more effective tool that’s publicly accessible, scalable, and consistent. I’ll wait.
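For what it's worth, here's roughly what I mean by "prompted properly" - a minimal sketch assuming the OpenAI Python SDK; the system prompt wording and model name are illustrative, not a recipe I'm claiming is optimal:

```python
# Minimal sketch of using an LLM with informed oversight: a system prompt
# that constrains the model toward epistemic rigor. Assumes the OpenAI
# Python SDK; the prompt text and model name are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EPISTEMIC_RIGOR = (
    "Evaluate the user's claim. Separate what is well-supported from what is "
    "speculative, name any logical fallacies or internal contradictions, state "
    "your confidence, and say plainly if you don't know or if sources conflict. "
    "Do not invent citations."
)

def evaluate_claim(claim: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": EPISTEMIC_RIGOR},
            {"role": "user", "content": claim},
        ],
        temperature=0,  # favour consistency over creativity
    )
    return response.choices[0].message.content

print(evaluate_claim("Meditation and recreational drugs produce identical brain states."))
```

None of this makes the output gospel - it just narrows what the model draws on, which was my whole point about informed oversight.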
Right now, the capabilities of LLMs are the worst they'll ever be. It could literally be tomorrow that someone drops an LLM perfectly calibrated to evaluate truth claims. But even right now, we're at least 90% of the way there.
The reason people fail to understand the untruths of AI is the same reason people hurt themselves with power tools, or use a calculator wrong.
You don't blame the tool, you blame the user. LLMs are no different. You can prompt GPT to intentionally give you bad info, or lead it to give you bad info by posting increasingly deranged statements. If you stay coherent, well-read, and make an attempt at structuring arguments to the best of your ability, the pool of data GPT pulls from narrows enough to be more useful than anything else I know.
I'm curious as to what you regard as a better tool for evaluating truth?
Period.
What makes you think humans are better at evaluating truth? Most people can’t even define what they mean by “truth,” let alone apply epistemic rigor. Tweak it a little, and GPT is more consistent and applies reasoning patterns that outperform the average human by miles.
Epistemology isn’t some mystical art; it’s a structured method for assessing belief and justification, and large models approximate it surprisingly well. Sure, it doesn’t “understand” truth in the human sense, but it does evaluate claims against internalized patterns of logic, evidence, and coherence based on a massive corpus of human discourse. That’s more than most people manage in a Facebook argument.
So yes, it can evaluate truth. Not perfectly, but often better than the average person.
I'm good enough at noticing my own flaws not to be arrogant enough to believe I'm immune from making mistakes :p
Granted, it is flaky unless you've configured it not to be a shit cunt. Before I manually set these prompts and memory references, it talked shit all the time.
Actually, given the aforementioned prompts, it's quite good at discerning flaws in my arguments and logical contradictions.
I've also trained its memory not to make assumptions when it comes to contentious topics, and to always source reputable articles and link them to replies.
I often use it to check whether my rationale is correct, or if my opinions are valid.
The first time I saw this post: sunglasses
The second time: 375mL can of beer.
The third time: my cat.