What's the deal with male loneliness?
GamingChairModel @lemmy.world · 1 Post · 634 Comments · Joined 2 yr. ago
Again not talking about the main issue that every man who feels alone will tell you is the root of his problem:
-Lack of a relationship.
-Lack of friendships due to other friends being invested in their relationships.
Actually, your comment touches on something that is really interesting to me, and a major part of where you and I differ on what male loneliness means. You've elevated the romantic committed relationship with a woman as the primary means by which men are expected to derive social standing and stability, but I view it primarily as an issue of friendships, mainly friendships with other men. The loneliness problem, in my view, comes from men being unable to form strong relationships with other men, and a wife or girlfriend or whatever is secondary to that.
Maybe it's because I've always had stability in my friendships but didn't have committed romantic relationships until my 30's, but it seems like the problem of loneliness comes from not feeling like you have people in your corner (friends, family, even work colleagues), but I think focusing on sexual and romantic relationships is itself isolating and lonely, even for men who do get married. Now that I'm married I still spend plenty of time with my friends, married or single, based on the topic/activity/interest that ties us together.
Because plenty of men who do not conform to gender norms or toxic masculinity (or masculinity at all) still feel alone. And their experiences get invalidated by this explanation.
It sounds like you completely miss the application of the explanation itself. The phrase toxic masculinity describes the social norms and expectations that men act a certain way. Society imposes gender norms on people such that those who don't comply are at the highest risk of being shunned or ostracized, and having trouble making social connections. And the social pressure may make men act in ways they wouldn't otherwise, so that they grow up poorly equipped to be introspective and understand their own wants/desires/emotions/drives/motivations.
Toxic masculinity tells men what they're not allowed to be, and tells men what they must be. Both sides of that same coin are toxic to men, and by extension those that the men interact with.
Way more likely that it turns into half of the people we know not even using desktop computing anymore.
if we highly restrict the parameters of what information we’re looking at, we then get a possible 10 bits per second.
Not exactly. More the other way around: human behavior in response to inputs is only observed to process about 10 bits per second, so it is fair to conclude that brains are highly restricting the parameters of the information that actually gets used and processed.
When you require the brain to process more information and discard less, it forces the brain to slow down, and the observed rate of speed is on the scale of 5-40 bits per second, depending on the task.
You can still brute force it, which is more or less how back propagation works.
Intractable problems of that scale can't be brute forced because the brute force solution can't be run within the time scale of the universe, using the resources of the universe. If we're talking about devoting all of humanity's computing power to a solution and hoping to solve it before the sun expands to engulf the earth in about 7.5 billion years, then it's not a real solution.
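To make "can't be run within the time scale of the universe" concrete, here's a back-of-the-envelope sketch. All of the figures below are illustrative assumptions of my own (exascale machines, ten billion of them, running until the sun expands), not numbers from the paper:

```python
import math

# Rough illustration of why exponential search spaces can't be brute forced.
# Every figure here is an illustrative assumption, not a measured value.
OPS_PER_SECOND = 10**18          # one exascale machine
MACHINES = 10**10                # wildly optimistic: ten billion of them
SECONDS_LEFT = 7.5e9 * 3.15e7    # ~7.5 billion years until the sun expands

total_ops = OPS_PER_SECOND * MACHINES * SECONDS_LEFT
print(f"Operations available: ~10^{len(str(int(total_ops))) - 1}")

# A search space of 2^n candidates outruns that budget almost immediately:
n = math.log2(total_ops)
print(f"Exhausted by a search space of only about 2^{n:.0f} candidates")
```

So even with absurdly generous assumptions, brute force dies somewhere around 150 bits of search space, and NP-hard training problems blow past that at trivially small input sizes.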
I think the fundamental issue is that you're assuming that information theory refers to entropy as uncompressed data but it's actually referring to the amount of data assuming ideal/perfect compression.
Um, so each character is just 0 or 1 meaning there are only two characters in the English language? You can't reduce it like that.
There are only 26 letters in the English alphabet, so a letter can be encoded in fewer than 5 bits (2^5 = 32). Morse code, for example, encodes letters in at most 4 dot/dash symbols per letter (the most common letters use the fewest, and the longest use 4). A typical sentence will reduce down to an average of 2-3 bits per letter, plus the pause between letters.
And because the distribution of letters in any given English text is nonuniform, there's less meaning per letter than it takes to strictly encode things by individual letter. You can assign values to whole words and get really efficient that way, especially using variable encoding for the more common ideas or combinations.
If you scour the world of English text, the 15-character string "Abraham Lincoln" will be far more common than even the 3-letter string "xqj," so lots of those multi-character expressions convey only a much smaller number of bits of entropy. So it might take someone longer to memorize a truly random 10-character string, including case sensitivity and symbols and numbers, than to memorize a 100-character sentence that actually carries meaning.
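For a rough feel of how far below 5 bits per letter English actually sits, here's a quick Shannon-entropy calculation over approximate letter frequencies. The frequency table is a ballpark figure of my own, and the long tail of rare letters is lumped into one bucket, which understates the total a bit:

```python
import math

# Approximate relative frequencies of common English letters; the rare
# letters are lumped into "other". Figures are rough, for illustration.
freq = {
    'e': 0.127, 't': 0.091, 'a': 0.082, 'o': 0.075, 'i': 0.070,
    'n': 0.067, 's': 0.063, 'h': 0.061, 'r': 0.060, 'd': 0.043,
    'l': 0.040, 'u': 0.028, 'other': 0.193,
}

# Shannon entropy: average bits per letter under ideal compression.
entropy = -sum(p * math.log2(p) for p in freq.values())
print(f"~{entropy:.1f} bits per letter")  # comfortably under the naive 5 bits
```

And that's before accounting for letter-to-letter correlations (a "q" is almost always followed by "u"), which push the true entropy of running English text even lower.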
Finally, once you actually get to reading and understanding, you're not meticulously remembering literally every character. Your brain is preprocessing some stuff and discarding details without actually consciously incorporating them into the reading. Sometimes we glide past typos. Or we make assumptions (whether correct or not). Sometimes when tasked with counting basketball passes we totally miss that there was a gorilla in the video. The actual conscious thinking discards quite a bit of the information as it is received.
You can tell when you're reading something that is within your own existing knowledge, and how much faster it is to read than something that is entirely new, on a totally novel subject that you have no background in. Your sense of recall is going to be less accurate with that stuff, or you're going to significantly slow down how you read it.
I can read a whole sentence with more than ten words (far more than ten characters) in a second, while also retaining what music I was listening to, what color the page was, how hot it was in the room, how itchy my clothes were, and how thirsty I was during that second, if I pay attention to all of those things.
If you're preparing to be tested on the recall of each and every one of those things, you're going to find yourself reading a lot slower. You can read the entire reading passage but be totally unprepared for questions like "how many times did the word 'the' appear in the passage?" And that's because the way you actually read and understand is going to involve discarding many, many bits of information that don't make it past the filter your brain puts up for that task.
For some people, memorizing the sentence "Linus Torvalds wrote the first version of the Linux kernel in 1991 while he was a student at the University of Helsinki" is trivial and can be done in a second or two. For many others, who might not have the background to know what the sentence means, they might struggle with being able to parrot back that idea without studying it for at least 10-15 seconds. And the results might be flipped for different people on another sentence, like "Brooks Nader repurposes engagement ring from ex, buys 9-carat 'divorce ring' amid Gleb Savchenko romance."
The fact is, most of what we read is already familiar in some way. That means we're processing less information than we're actually taking in, discarding a huge chunk of what we perceive on the way to what we actually think. And when we encounter things we didn't necessarily expect, we slow down or we misremember things.
So I can see how the 10-bit number comes into play. The paper cited various studies showing that image/object recognition tends to operate in the high 30s in bits per second, and that many memorization or video-game-playing tasks involve processing in the 5-10 bit range. Our brains are just highly optimized for image processing and language processing, so I'd expect those tasks to be higher performance than other domains.
How does the updater get updated, though?
But if you read the article, then you saw that the author specifically concludes that the answer to the question in the headline is "yes."
This is a dead end and the only way forward is to abandon the current track.
I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.
They did! Here's a paper that proves basically that:
van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5
Basically it formalizes the proof that producing any black box algorithm that is trained on a finite universe of human outputs to prompts, and is capable of taking in any finite input and putting out an output that seems plausibly human-like, is an NP-hard problem. And NP-hard problems of that scale are intractable, and can't be solved using the resources available in the universe, even with perfect/idealized algorithms that haven't yet been invented.
This isn't a proof that AI is impossible, just that the method to develop an AI will need more than just inferential learning from training data.
The paper gives specific numbers for specific contexts, too. It's a helpful illustration for these concepts:
A 3x3 Rubik's cube has 2^65 possible permutations, so the configuration of a Rubik's cube is about 65 bits of information. The world record for blind solving, where the solver examines the cube, puts on a blindfold, and solves it blindfolded, had someone examining the cube for 5.5 seconds, so the 65 bits were acquired at a rate of 11.8 bits/s.
Another memory contest has people memorizing strings of binary digits for 5 minutes and trying to recall them. The world record is 1467 digits, exactly 1467 bits, and dividing by 5 minutes or 300 seconds, for a rate of 4.9 bits/s.
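The arithmetic behind those two rates, using the figures quoted above:

```python
import math

# 3x3 Rubik's cube: ~4.3 * 10^19 legal configurations, i.e. about 65 bits.
cube_states = 43_252_003_274_489_856_000
cube_bits = math.log2(cube_states)
print(f"Cube configuration: {cube_bits:.1f} bits")

# Blind-solving record: the solver inspected the cube for ~5.5 seconds.
print(f"Acquisition rate: {cube_bits / 5.5:.1f} bits/s")

# Binary-digit memorization: 1467 digits (= 1467 bits) in 5 minutes.
print(f"Memorization rate: {1467 / 300:.1f} bits/s")
```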
The paper doesn't talk about how the human brain is more optimized for some tasks over others, and I definitely believe that the human brain's capacity for visual processing, probably assisted through the preprocessing that happens subconsciously, or the direct perception of visual information, is much more efficient and capable than plain memorization. So I'm still skeptical of the blanket 10-bit rate for all types of thinking, but I can see how they got the number.
I mean: look at an image for a second. Can you only remember 10 things about it?
The paper actually talks about the winners of memory championships (memorizing random strings of numbers or the precise order of a random arrangement of a 52-card deck). The winners tend to have to study the information for an amount of time roughly equivalent to 10 bits per second.
It even talks about the guy who was given a 45-minute helicopter ride over Rome and asked to draw the buildings from memory. He made certain mistakes, showing that he essentially memorized the positions and architectural styles of about 1000 buildings, each effectively one choice out of about 1000 possibilities, for an effective bit rate of 4 bits/s.
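A quick sanity check on that 4 bits/s figure, taking the quoted numbers at face value:

```python
import math

# 1000 buildings, each effectively one choice out of ~1000 possibilities.
bits_per_building = math.log2(1000)   # ~10 bits per building
total_bits = 1000 * bits_per_building
flight_seconds = 45 * 60

print(f"~{total_bits / flight_seconds:.1f} bits/s")
```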
That experience suggests that we may compress our knowledge by taking shortcuts, some of which are inaccurate. It's much easier to memorize details in a picture where everything looks normal, than it is to memorize details about a random assortment of shapes and colors.
So even if I can name 10 things about a picture, it might be that those 10 things aren't sufficiently independent from one another to represent 10 bits of entropy.
The problem here is that the bits of information need to be clearly defined, otherwise we are not talking about actually quantifiable information.
Here they are talking about very different types of bits.
I think everyone agrees on the definition of a bit (a binary two-value variable), but the active area of debate is which pieces of information actually matter. If information can be losslessly compressed into smaller representations of that same information, then the smaller compressed size represents the informational complexity in bits.
The paper itself describes the information that can be recorded but ultimately discarded as not relevant: for typing, the forcefulness of each key press or duration of each key press don't matter (but that exact same data might matter for analyzing someone playing the piano). So in terms of complexity theory, they've settled on 5 bits per English word and just refer to other prior papers that have attempted to quantify the information complexity of English.
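A cheap way to see the compression-as-information idea in action is to run a general-purpose compressor over redundant English versus random bytes. zlib here is just a stand-in for "ideal compression," so the compressed sizes are only an upper bound on the true entropy, not the entropy itself:

```python
import os
import zlib

# Lossless compression as a rough upper bound on information content:
# redundant text shrinks a lot; incompressible random bytes barely at all.
english = b"the quick brown fox jumps over the lazy dog " * 100
random_bytes = os.urandom(len(english))

for label, data in [("English text", english), ("random bytes", random_bytes)]:
    compressed = zlib.compress(data, 9)  # 9 = maximum compression effort
    print(f"{label}: {len(data)} -> {len(compressed)} bytes")
```

The repetitive English collapses to a tiny fraction of its raw size, while the random bytes don't shrink at all, which is exactly the distinction between raw character count and bits of entropy.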
The Caltech release says they derived it from "a vast amount of scientific literature" including studies of how people read and write. I think the key is going to be how they derived that number from existing studies.
Speaking, which is conveying thought, also far exceeds 10 bits per second.
There was a study in 2019 that analyzed 17 different spoken languages and found that languages with a lower complexity rate (bits of information per syllable) tend to be spoken faster, such that the information rate is roughly the same across spoken languages, at roughly 39 bits per second.
Of course, it could be that the actual ideas and information in that speech is inefficiently encoded so that the actual bits of entropy are being communicated slower than 39 per second. I'm curious to know what the underlying Caltech paper linked says about language processing, since the press release describes deriving the 10 bits from studies analyzing how people read and write (as well as studies of people playing video games or solving Rubik's cubes). Are they including the additional overhead of processing that information into new knowledge or insights? Are they defining the entropy of human language with a higher implied compression ratio?
EDIT: I read the preprint, available here. It purports to measure externally measurable output of human behavior. That's an important limitation in that it's not trying to measure internal richness in unobserved thought.
So it analyzes people performing external tasks, including typing and speech with an assumed entropy of about 5 bits per English word. A 120 wpm typing speed therefore translates to 600 bits per minute, or 10 bits per second. A 160 wpm speaking speed translates to 13 bits/s.
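That conversion is just:

```python
BITS_PER_WORD = 5  # the paper's assumed entropy per English word

typing_wpm = 120
speaking_wpm = 160

print(f"Typing:   {typing_wpm * BITS_PER_WORD / 60:.1f} bits/s")
print(f"Speaking: {speaking_wpm * BITS_PER_WORD / 60:.1f} bits/s")
```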
The calculated bits of information are especially interesting for the other tasks (blindfolded Rubik's cube solving, memory contests).
It also explicitly cited the 39 bits/s study that I linked as being within the general range, because the actual meat of the paper is analyzing how the human brain brings 10^9 bits of sensory perception down 9 orders of magnitude. If it turns out to be 8.5 orders of magnitude, that doesn't really change the result.
There's also a whole section addressing criticisms of the 10 bit/s number. It argues that claims of photographic memory tend to actually break down into longer periods of study (e.g., a 45-minute flyover of Rome to recognize and recreate 1000 buildings of 1000 architectural styles translates into 4 bits/s of memorization). And it argues that the human brain tends to trick itself into perceiving a much higher complexity than it is actually processing (known as "subjective inflation"), implicitly arguing that a lot of that is actually lossy compression that fills in fake details from what it assumes is consistent with the portions actually perceived, and that the observed bitrate from other experiments might not properly count the bits of entropy involved in the less accurate shortcuts the brain takes.
I still think visual processing seems to be faster than 10, but I'm now persuaded that it's within an order of magnitude.
St Nicklaus
The golfer?!?
Seriously. I'm not at all an art guy so I feel qualified to observe that The Scream is probably one of the top 5 (and definitely top 10) most well known paintings, somewhere shortly after Da Vinci's Mona Lisa and Van Gogh's Starry Night.
Yeah, there have been cases of people dealing with the bureaucratic nightmare that followed when they got vanity license plates that said "NULL": a bunch of bad program logic, combined with incomplete data in the databases, sent them a bunch of tickets.
Making it so that people can take advantage of even more complex computer errors could ruin things for other people.
No advertisement
You don't think commercial products can get good (or bad) coverage in a place like this? In any discussion of hardware, software (including, for example, video games), cars, books, movies, television, etc., there's plenty of profit motive behind getting people interested in things.
There are already popular and unpopular things here. Some of those things are pretty far removed from a direct profit motive (Linux, Star Trek memes, beans). But some are directly related to commercial products being sold now (current video games and the hardware to run them, specific types of devices from routers to CPUs to televisions to bicycles or even cars and trucks, movies, books, etc.).
Not to mention the political motivations to influence politics, economics, foreign affairs, etc. There's lots of money behind trying to convince people of things.
As soon as a thread pops up in a search engine it's fair game for the bots to find it, and for that platform to be targeted by humans who unleash bots onto that platform. Lemmy/Mastodon aren't too obscure to notice.
Might not need to even have much new mining.
Gallium is primarily extracted from bauxite, which is already mined worldwide for aluminum processing. So with gallium being a very small byproduct of aluminum processing from mined bauxite, the bottleneck probably isn't in the mines, because mining and processing bauxite is already something many countries do. It's just not always economically profitable to further process the gallium at the same time, but if the need is there, that can be ramped up at existing aluminum plants.
It's not an overnight process but with many elements, the limiting factor isn't actual rarity, but the high energy/equipment needs of the process to extract and purify the element, and the high amounts of waste produced.
That's what I'm talking about, though. You see male friendships as a method of coping with a more fundamental problem relating to women, and I totally disagree, and argue that healthy male friendships are social connections worth developing and maintaining in their own right, whether you are or aren't in a committed relationship with a woman. Even your framing of why male friendships fall apart involves women. It's the centrality of women in your worldview that is preventing you from seeing how male friendships are a critical thing to have in addressing male loneliness.
Put another way, married men need healthy male friendships, too. Putting all of that emotional labor into a single link with a woman is fragile and unreliable, and I'd argue inherently unhealthy. People need multiple social links and the resilience and support that comes from whole groups connected in a web, not just a bunch of isolated pairings.
And to be clear, I'm not saying that friendships are a replacement for romantic and sexual relationships. I'm saying that social fluency, empathy, and thoughtfulness necessary for being able to maintain deep friendships are important skillsets for maintaining romantic relationships as well. The lack of romantic partners, then, isn't the "base issue," but is a symptom of the internal state of the person and how that person interacts with the world.
So I maintain that your worldview switches cause and effect, at least compared to mine. And maybe I'm wrong, and I'm not trying to convince you that I'm right. I'm bringing all this up to share that the surprising part of this line of comments is that I was genuinely not expecting someone to treat romantic difficulties as a primary or fundamental cause of male loneliness. To show you that at least there are other people who view these issues very differently from you, and that there's a broad diversity of thought on the topic.