
Gaywallet (they/it) @Gaywallet@beehaw.org
Posts: 214 · Comments: 768 · Joined: 3 yr. ago

  • It's rather trivial to find a study talking about BMI, but talking about it in extremes like this does no one any good. I would highly suggest you go educate yourself on public health or at least read something in the literature before making such extreme claims. To help you get started, here's a fairly comprehensive review on BMI in the clinical context.

    You do bring up a good point in that it's important how we use BMI and just what it represents. Major institutions such as the AMA have started to reassess exactly how BMI is interpreted (and to provide guidelines for doing so) in the clinical sense, because there are problematic ways to use BMI. Of note, they do not advocate against using BMI, but rather argue it should be one of many indicators, as that's the basis of differential diagnosis in the first place.

  • We're not a space for low effort reddit/twitter style gotchas. Be better.

  • Gonna stop the conversation here, this isn't going anywhere. In order to operate in good faith you need to be willing to put some thought and effort into your comments. A vague "America bad" reply is completely justified as a criticism of your equally vague "America is fat" statement. Back and forth bickering isn't going to help, please disengage.

  • The discussion trying to process your extremely vague statement is not productive.

  • Being overweight is uncomfortable, limiting, and can be a burden on people around you

    While I am not disagreeing in any way, I believe it's important to point out that there's also a distinct difference between obese and overweight. Oftentimes "overweight" is used as an adjective to indicate that someone is outside the normal weight range, but in the context of medicine and the context of this article, it's a range of BMI values between the normal and obese categories.

    Quality of life measures generally find little to no negative effect in the overweight category; quality of life only declines as you continue into the obese categories.

  • It’s my understanding that people want to get away from BMI because it’s crude

    Pretty much the only people advocating for this are people who get into weightlifting, and I'd say the vast majority of them were already in the overweight category before putting on extra muscle. BMI is by no means perfect, but it's actually extremely good at doing what it was designed to do, which is to give a quick and easy metric by which to judge someone's general health. It's meant to be a starting point for a discussion around exercise and other more important factors, when it's clinically relevant to do so.
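
    For reference, BMI itself is just weight in kilograms divided by height in metres squared, and the categories being discussed are fixed cutoffs on that number. Here's a minimal sketch using the standard WHO adult ranges (the example figures are illustrative only, not taken from the article):

        def bmi(weight_kg: float, height_m: float) -> float:
            # Body Mass Index: weight (kg) divided by height (m) squared
            return weight_kg / height_m ** 2

        def who_category(value: float) -> str:
            # Standard WHO adult cutoffs for the categories discussed above
            if value < 18.5:
                return "underweight"
            if value < 25:
                return "normal"
            if value < 30:
                return "overweight"
            return "obese"

        print(round(bmi(82, 1.75), 1), who_category(bmi(82, 1.75)))  # 26.8 overweight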

  • While I appreciate your concern for how fat America is, I'm struggling to see how this comment is helpful or leads to a productive discussion in any way 🤷‍♀️

  • This is about the overweight BMI category, not the obese categories. It's also talking about how it's actually not associated with an increase in overall mortality, but rather the opposite. This observation has been around in the literature for quite some time, predating the obesity crisis.

    What are you trying to even say with this comment?

  • Not the case at all, enjoy a 14 day ban. Try thinking about what you did wrong.

  • A blunt axe or sword, and an insufficient drop height leading to death by suffocation instead of a snapped neck, have absolutely nothing to do with guillotines; those are problems with beheading by axe or sword and with hanging. The article very explicitly says that this kind of tipping did not happen (or was extremely rare).

    You are correct that the article talks about people tipping the guillotine operator, in the specific context of "certain eras in England", which implies it was neither widespread nor applicable to anything outside of that very specific context.

  • Right, to be clear I wasn't saying it didn't happen, just that it wasn't customary. I don't think it's fair to say that the practice was as widespread as the comment implies.

  • Huge news! Very cool to see. Stem cells are wild, can't wait to keep seeing all the awesome applications.

  • I'm merely explaining why it is not analogous and why attraction cannot be considered bigoted. Anything that involves intent can be criticized for bigotry if it is present.

  • Someone who prefers lady types as sexual partners may prefer to look at cheesecake pics of lady types, I guess, and that’s technically sexist because they’re choosing those pics based on lady characteristics.

    We have no control over who we are sexually attracted to (or whether we are attracted at all), but we do have control over how we interact with the world. Who you are attracted to cannot be sexist, racist, etc. because there is no intention - it merely is. Being attracted and choosing to objectify someone are two very distinct processes, because only the latter involves intention. Discrimination is also an act of intent.

  • Jumping into the feminism community to challenge a fairly core tenet of feminism is a bad take. I'm removing this because it was a comment clearly made in bad faith. You're expected to be nice on our instance, do better in the future.

  • Feel free to petition the Lemmy developers (post in Lemmy support or open a GitHub issue) to add this functionality if it's something you feel strongly about.

  • In the future, please do not editorialize titles unless the title is clickbait and you're providing a title which isn't.

  • AI is not a tool that is going to disappear. Like all tools, finding out where it can be useful in your life or career and putting it to use is a good thing. Just like any other tool, it might not be particularly useful in your life - many of us learn how to use a calculator in school, yet hardly ever use one. Similarly, we may learn to use a hammer yet rarely find a reason to. With something like AI, however, I can see a lot of potential uses in the future, and also a slow expansion of how many tools are built upon AI as a foundation. Sensing this early and helping those in education understand where and when it's appropriate to use AI is a useful thing to be teaching children, much like teaching basic computer skills.

    Thinking of AI on a single dimension and using that to argue for its relevance in the classroom is a major oversight. AI is to be combined with your own thinking, in the same way that a calculator is. Telling AI to write the paper for you may result in a factual paper when it deals with something as monumental as the example of MLK, but in many cases, as the first professor points out, it will actually result in something full of errors and biases. Rather than thinking about this in black and white and advocating explicitly for or against something like ChatGPT, the professor should be advocating for its use as something more akin to a partner - someone you work with to get a finished product. ChatGPT can certainly refine the words you give it to make them easier to read or to present a more coherent thought process. ChatGPT can, at least in some circumstances, direct you towards additional resources to research which might help to bolster whatever you are writing.

    The article brings up AI-assisted plagiarism, which I think is an important concept to raise. AI itself sits in a weird space right now in that it is trained on the words and minds of others without appropriately citing them. It also has a lot of issues with hallucinations, which make it particularly problematic when you're looking for truth. The idea that the use of a tool would rob you of your ability to think at a higher level is just as misplaced as math teachers believing that calculators prevent people from conceptually understanding what is going on - it's simply incorrect. While the research does not currently exist to tell us whether the use of AI alongside human thinking is problematic, I suspect we will see a similar outcome. Learning to use tools to accomplish a task does not make us dumber, and in fact enables us to do things with increased efficiency and skill. Speaking to any tradesman who has learned to use a bunch of different tools should be more than enough to convince you that they know what they are doing and that their tools are not holding them back from knowledge in any meaningful way.

    We should be cautious given the above. AI is not omniscient, nor is it perfect. It makes things up, it's prone to bias, and it's easy to let it completely automate tasks for us. We should be careful to understand how and when it's appropriate to employ. When we take our own thinking out of the equation, or when we do not understand what AI brings to the table, that is where AI can be harmful. When we employ AI to accomplish tasks without human oversight or intervention it can be extremely problematic. Letting AI make decisions about child welfare, for example, is not a good idea. But that is a far cry from using AI to help us put our thoughts about a subject into the form of an essay. AI needs to be a thought partner for it to be most useful, and to make the most out of this valuable tool we need to find a way to incorporate that into our schools and lives, alongside some basic understanding of how these tools work.
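
    To make the "thought partner" framing concrete, here's a minimal sketch using the OpenAI Python SDK; the model name, prompt, and draft text are placeholders I've made up, not anything from the article. The point is that the student supplies the draft and the ideas, and the model is only asked to critique and tighten them rather than write the paper.

        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        draft = "My rough draft paragraph arguing the essay's main point..."

        system_prompt = (
            "You are an editing partner. Do not write new content; point out unclear "
            "reasoning, missing citations, and awkward wording in the draft, then suggest "
            "a tightened revision of the author's own words."
        )

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": draft},
            ],
        )
        print(response.choices[0].message.content)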

  • If someone deletes their posts or comments they are simply gone. I'm sure many privacy minded folks around here feel strongly about this staying precisely the way things are, so I wouldn't expect any changes.