Two brands suspend advertising on X after their ads appeared next to pro-Nazi content
Gaywallet (they/it) @Gaywallet@beehaw.org · Posts 214 · Comments 768 · Joined 3 yr. ago

cheers m8, it happens
reminder to be nice on our instance
Blue light suppresses the release of endogenous melatonin. To get an idea of how much it suppresses it, take a look at the image below. In this image, participants were exposed to no light, or to a 2 lux light at 460 nm (blue line) or 560 nm (green line), for 1.5 h. source
As for the mental stimulation part, that's going to vary a lot more from person to person and depend on how engaging the content is, but you're right that being mentally stimulated in the wrong ways can keep you awake too.
This belongs in politics, not technology
Not a strong case for NYT, but I've long believed that AI is vulnerable to copyright law, and that it's likely the only thing that will stop or slow its progression. Given the major issues with all AI - how inequitable and bigoted these systems are - and their increasing use, I'm hoping this helps to start conversations about limiting the scope of AI or its application.
I'm so sorry, that's beyond fucked up
You have the right to request medical records and they must be returned to you in a timely manner. The doctor can refuse to fill out a form, but you can always just submit the entirety of your health record instead.
Maybe request a copy of your medical records over the entire time-frame and submit it, adding to the 3000+ pages of documentation? The SSDI page says "Medical evidence already in your possession. This includes medical records, doctors' reports, and recent test results; and", which seems to indicate that the doctor doesn't need to fill it out. You have the legal right to access a copy of your records, and providers are required by law to process these requests.
There are a million different ways. Most trans people don't immediately change their last name, so you can use that. Or you can refer to something that Emily did to provide context. I'm sure if you stop and think about it, you can figure out a few more ways too.
At the end of the day it's not about making it easier for you to understand what's going on at the expense of disrespecting her. She's asking for you to use her name, so stop calling her something else.
If you ever experience this, I highly encourage you to file a HIPAA complaint. They take this very seriously, and even the minimum penalties for a violation are steep. If you have the time and energy, please bring this to the attention of the clinic or facility you visited - I guarantee you there are staff who are in complete agreement with you and would be furious if this were true.
which would be illegal to sell or share with anyone other than a patient's doctors
Assuming you are in the US, this is already the case. HIPAA is incredibly strict.
I see a lot of people talking about how this is an issue of capitalism, viewed through the lens of who purchases the electronic health record, or EHR (the c-suite). That framing isn't really applicable when it comes to healthcare delivery systems. Every system has c-suite representation from the clinical side: CMO, CNO, CMIO, CNIO, etc. In addition, physicians have strong lobbying power within these orgs to ensure that they are listened to.
Ultimately trade-offs need to be made somewhere, and the real issue is that these pieces of software are incredibly complicated. Have you ever stopped to consider how much information might be in your medical chart for a single doctor's visit? Prior to the visit they need to have or collect a bunch of customer data on you - name, date of birth, insurance info, etc. They need to schedule an appointment for you at a location with a specific doctor, which means they need a calendaring and scheduling system and all the data that comes along with that. They may need to collect and scan documents about you, or get information from other medical systems.

Then when you show up, you interact with more people than just your doctor - you get checked in, they collect a form of payment or the actual payment itself (meaning they need to interface with insurance to understand what to bill), then a nurse or medical assistant takes you back. A bunch of vitals get recorded - height, weight, blood pressure, pulse oximetry. Some of these come from devices which are hooked up to their system. Then the doctor comes in and does any number of things to you - there's a lot of narrative that needs to be collected, pieces of information about why you are there, your history, and so on. They may collect some kind of material from you, for which the system needs to at the very least record that it was collected and what the result is (realistically it's interfaced, and sometimes the interface includes media such as images). I could keep going, but I think you get the idea - the amount that needs to go into a system to make it useful to all the various staff at a place of service means that the product is very expensive and difficult to create.
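To give a sense of scale, here's a deliberately minimal Python sketch of just some of the data a single visit touches. All class and field names here are hypothetical (not any real EHR's schema), and each one stands in for what is, in practice, an entire subsystem:

```python
# Purely illustrative sketch - every class here is a drastic simplification
# of what real EHR systems have to model for one outpatient visit.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Patient:
    name: str
    date_of_birth: datetime
    insurance_member_id: str          # needed before the visit for eligibility and billing


@dataclass
class Appointment:
    patient: Patient
    provider: str                     # the scheduled physician
    location: str
    scheduled_for: datetime           # implies a whole calendaring/scheduling subsystem


@dataclass
class Vitals:
    height_cm: float
    weight_kg: float
    blood_pressure: str               # e.g. "120/80", often fed directly from connected devices
    pulse_ox_pct: float


@dataclass
class Encounter:
    appointment: Appointment
    copay_collected: float            # requires an insurance/billing interface to know what to bill
    vitals: Vitals
    chief_complaint: str              # why the patient is there
    history: list[str] = field(default_factory=list)         # narrative collected by the doctor
    orders: list[str] = field(default_factory=list)          # labs/specimens; results arrive via interfaces
    scanned_documents: list[bytes] = field(default_factory=list)  # outside records, forms, images
```

And this still leaves out medications, allergies, problem lists, referrals, messaging, and everything else clinical staff rely on day to day.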
The real issue with capitalism comes in here - it's an issue of very few companies providing good products. It's very difficult to create a competing product in the EHR world because the established giants have been developing for 30+ years. They've poured billions of dollars and countless man hours into creating software that can manage the extreme complexity of medical care. Even among these giants, which do hire clinical professionals to help shape the front end so it's as user friendly as possible, medicine is huge and there are people from all walks of life practicing - some are great at tech and others not so much. Being able to appease everyone means you need a flexible UX, which also means... more money and more man hours. This problem unfortunately can't really fix itself until it's possible to create a complicated system with fewer resources, which I don't foresee happening anytime soon.
Multimodal AI has been a goal for quite some time - eventually we'll reach a point where a multimodal system represents intelligence reasonably accurately. Approaching this from a technical perspective and trying to define intelligence is, in my opinion, a much more boring conversation than talking more broadly about intelligence.
Most people would agree that most living things are intelligent in some fashion. In particular, if you ask whether pets (dogs, cats, etc.) can be intelligent, people will agree. The more abstract a living being is from our emotions and our society, the less likely people are to call that life intelligent, with the caveat that bigger life is often considered intelligent before smaller life is. Insects, for example, are often dismissed as not intelligent on account of their size. A great example here is bees, which often find themselves ranked amongst the smartest animals on earth because they exhibit higher-level behavior such as choosing non-violence, having emotions, and more. We also know that bees communicate with each other through dance, and communication is often considered a sign of a higher level of intelligence.
I think where people often get lost is in how to compare different modes of intelligence. Historically, we're really bad at measuring this, even in humans. Most people are familiar with the concept of an 'Intelligence Quotient', or IQ, as a measure of how 'smart' someone is. Unfortunately these measures are far from objective; while IQ tests have improved over the years as we've refined our western, educational thinking, they still have serious biases and chronically underestimate intelligence in an inequitable way (minority individuals still score lower than non-minority folks from similar SES and other background factors). It's very difficult (one might suggest impossible) to compare and contrast aspects of intelligence such as visual processing with other aspects such as emotional intelligence. When we expand this kind of thinking to non-human individuals, how does one create an IQ score that includes factors we don't measure in humans, such as olfactory intelligence (how well one can identify smells)? If humans score very low in comparison with well-known discriminators such as canines, how does that factor into an overall score of intelligence? Ultimately we must see that measures of intelligence, while useful, are not objective in a broader scope. We can reasonably accurately compare the visual processing intelligence of two systems, whether we consider them living or not, but we cannot use these to measure intelligence as a whole - otherwise we must consider visual processing AI as intelligent.
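To make that "not objective" point concrete, here's a toy Python sketch (all numbers invented for illustration, nothing to do with real psychometrics): a single composite score can rank two systems in opposite orders depending purely on which modalities the scorer decides to count.

```python
# Toy per-modality scores, invented purely for illustration.
scores = {
    "human":  {"visual": 110, "verbal": 115, "olfactory": 20},
    "canine": {"visual": 60,  "verbal": 10,  "olfactory": 220},
}


def composite(per_modality: dict[str, float], weights: dict[str, float]) -> float:
    """Collapse per-modality scores into one number - the weights decide the outcome."""
    total = sum(weights.values())
    return sum(per_modality[m] * w for m, w in weights.items()) / total


human_centric  = {"visual": 1, "verbal": 2, "olfactory": 0}  # ignores smell entirely
all_modalities = {"visual": 1, "verbal": 1, "olfactory": 1}  # counts smell equally

for name, per_modality in scores.items():
    print(name,
          round(composite(per_modality, human_centric)),
          round(composite(per_modality, all_modalities)))
# Human-centric weights rank the human first (113 vs 27);
# equal weights rank the canine first (82 vs 97).
```

The arithmetic is trivial; the point is that whoever picks the weights picks the winner, which is exactly why a single "intelligence" number hides more than it reveals.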
This is why I think the most interesting conversation is one about intelligence as a whole. On an IQ test, I could absolutely ace the visual intelligence portion of the test, scoring above 200 points, and then completely fail all the other ones (get nothing correct) and be considered a low IQ human. However, when an AI does it, we don't consider them intelligent. Why is that? Why don't we consider these tests when we speak about animals? Is it because we don't have a good way of translating what we wish to test to a system (animal) which cannot respond with a language we understand? How might this thinking change, if we were to find a way to communicate with animals, or expand our knowledge of their languages? Perhaps in a slight bit of irony the very intelligence we are questioning is providing us answers to long standing questions about the animal kingdom - AI has granted us access to understand animal communication in much more depth, revealing that bats argue over food, distinguish between genders, have names, and that mother bats speak to their babies in an equivalent of "motherese."
It's easy to brush off considerations like this in the context of AI because some people are making arguments which seem inane in the historical context of how we measure intelligence, but I think many of us don't realize just how much we've internalized from society about what "intelligence" means. I would encourage everyone to re-examine the very root itself, the word intelligence, and rather than deferring to existing ontologies for a definition, to consider the context in which these definitions were set and why it may be time to redefine the word itself or our relationship to it.
This is a reminder to be nice on our instance
In comparison they make me look lazy, and I'm paid a lot more than they are. We choose to value weird things in this world, so I hope that I can give back a little with posts like this, and my own financial contributions.
My nesting partner played Outside Lands on Sunday, and I've been busy getting ready for a planned surgery next week which will have me out of office for a bit on recovery, so I'm notably exhausted today. While life has been weird and rocky lately, my spirits are actually quite high at this point in time. I'm enjoying life, and looking forward to having some time off work to spin up some projects and spend time and socialize with my loved ones more.
I wouldn't be surprised if they'd be worried about the responsibility of such a place, both in terms of modding and I guess legal liability, but it can't hurt to ask, right? Try asking in /c/beehawsupport.
Mental health is one of the few communities that fall into a general category of 'often problematic on the internet' due to a confluence of factors noted already in this post as well as a few not mentioned - namely that people who are not educated can cause serious and real harm to others with bad or misinformed advice. In the same way that you shouldn't ask for legal advice from a random individual, asking for mental health advice online can be fraught with bad responses/answers. At this point in time we're not entertaining the idea.
Tagging @LinkOpensChest_wav@beehaw.org for transparency
Covid opened up more therapists to the idea of online services.
As someone in the medical field, I can say there were a lot of legal issues - deciding where a provider needs to be credentialed when they are not practicing in the state the patient is in is a tricky question. While I'm sure some providers were resistant to telehealth and were forced to get used to it starting in 2020, a lot of the resistance was a practical 'can I legally do this?' concern.
Just dropping in here to say that a topic like this is a fraught one that invites conflict. I'm also vaguely unsure what good could come out of it. It's not explicitly not nice, and it could lead to discussions about things people wish to improve in themselves, but if you want to ask questions like this in the future it may serve you to think about how to frame them in a way that encourages nice behavior.
bro this isn't reddit
you're being repeatedly antagonistic all over beehaw
this is your warning and reminder to be nice