Should I be aware of something when buying a TV?

Honestly, that's the best way to live IMO. You don't necessarily have to understand something or adopt it for yourself, but it literally costs you nothing to show someone the basic respect of addressing them how they wish to be addressed.
My best man came out as trans a few years after our wedding. It took me a while to get her new name down pat, but every time I messed up I corrected myself immediately. The other day when she sent me a picture to show her progress on HRT I told her how happy I was at the progress my maid of honor had made and how much happier she looked. It's that simple: just call someone by their preferred name/pronouns.
Shy person who still calls out transphobia and stands up for trans folks?
Smart TVs will collect your personal info and viewing habits and send it to the manufacturer if they're given half a chance.
Some scummy brands will even configure their TVs to automatically and silently connect to open wifi networks to phone home.
I honestly don't think that he's going to be forced out. To be blunt, the people who could force him out are likely the people who pushed for these changes in the first place. They wanted 3rd party apps to die so they could minimize costs caused by API usage, and force more users to the official app so they could put more eyeballs in front of ads while scraping more PII from users to sell. That mission has been fully accomplished. All of the fallout--the protests, the decrease in content moderation due to fewer mods and worse mod tools, the decrease in content quality from power users leaving, the flood of "fuck /u/spez" on every comment thread, the stream of negative articles in the tech press, all of it--was simply not accounted for, because it was considered an externality. Just as a chemical company doesn't consider the effects of dumping waste into a nearby river, Reddit didn't consider the effects of alienating the majority of the userbase that was responsible for making reddit what it was.
I just checked and literally every post in the past three days was submitted by just two accounts, with one user accounting for easily 95% of the posts.
Jesus, reddit really is dying.
To be fair, there isn't an example from the Windows store specifically, because the vast, vast majority of Windows programs are installed via standalone installation packages.
But yes, there was one instance where uninstalling a game would recursively delete the parent directory, up to and including potentially deleting the entire C:\ drive.
Nah, you just grabbed whatever shitty articles backed up your existing viewpoint, because if you'd bothered to do any actual research you'd have seen that, at worst, there's a hell of a lot more nuance than the anti-trans bigotry you spewed all over the thread:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9793415/
https://www.healthline.com/health/are-puberty-blockers-reversible
Oh, and please note that my links are to official government sources and peer-reviewed journals, not transphobic right-wing rags pushing agendas.
Let's see...
Right-wing rag, right-wing rag that has the phrase "Biblical truth" in its slogan, right-wing rag pushing vaccine conspiracies and transphobia on its front page, aaand... anti-trans hate group.
Your bias is showing.
FF16 was the first game I preordered in over a decade, and even then I only preordered it after I played the free demo and thoroughly enjoyed it.
On the one hand, I can understand her anger at the party. I personally disagree with her stances, but I can't hate on someone for not wanting to associate with a party that advertises a blank check while saying "all this needs is a primary challenger."
On the other hand... How the fuck can the GOP leadership welcome her by saying they're a party "where diversity of opinion is welcome" with a straight face?!?!
I worked graveyard shifts at a gas station for a year or two. My general experience, beyond what other people have said--good commute, fucking with your social life, taking its toll on your body, all that--is that working graveyard shifts is lonely. I cannot overstate how lonely it got; there were stretches of multiple hours where there were no customers at all, and it was just me and the long list of nightly chores I had to do (mopping floors, prepping food for the breakfast rush, restocking shelves, etc., etc.). Not having any human contact at all fucks with your head something fierce, especially when you mix in sleep deprivation and your body rebelling against the normal sleep rhythm.
My advice is that if you're going to be working night shift all alone, get into podcasts. Having a radio I could use to listen to NPR was the main thing that kept me sane, because I could at least have a human voice to listen to and keep my mind somewhat engaged.
I'm not shifting the goalposts--I have been consistent in my position that AI does not truly "learn" the way humans do, and is incapable of the comprehension required for actual human creativity. If anything, Tay spouting racist rhetoric because that's what was put into it supports that position; if it were capable of comprehending the language it was being fed, it wouldn't have done that.
You have stated that it's not infringing on copyright to train a model on published works, yes. I wholeheartedly disagree, because, as I have previously stated, AI models as they currently exist cannot produce new, derivative works based on their training data; they can only recombine pieces of that training data in various combinations. This is important because one of the requirements for copyright protection, per the US Copyright Office, is independent creation, which "means that the author created the work without copying from other works." AI's inability to create its own work without copying from other works means that it cannot produce copyrightable material.
As a result, if you input an infringing dataset into an AI's training model, the resulting output is also infringing, because it is not, and cannot be, transformative to the level required to meet the minimal creativity threshold needed for copyright protection. At best, you can argue that the infringement in an AI's output is acceptable under the de minimis doctrine (i.e., that the amount of the copyrighted work contained in an infringing work is so trivial as to not warrant protection). However, my belief is that if a hypothetical composite work takes all of its source material from 100 different copyrighted sources, it wouldn't qualify for de minimis protection, because the composite work is 100% infringing even though each individual source only contributed 1% of the total.
To summarize, my line of thinking is as follows:
- The specific output of an AI does not in and of itself qualify for copyright protection, because no human mind was involved in creating it except the one that gave the AI the prompt; that involvement is not significant enough to overcome the minimal creativity standard required for copyright protection. This is the position of the US Copyright Office (page 7, The Human Authorship Requirement):
> The U.S. Copyright Office will register an original work of authorship, provided that the work was created by a human being. The copyright law only protects “the fruits of intellectual labor” that “are founded in the creative powers of the mind.” Trade-Mark Cases, 100 U.S. 82, 94 (1879). Because copyright law is limited to “original intellectual conceptions of the author,” the Office will refuse to register a claim if it determines that a human being did not create the work.
- Since the specific output of an AI model lacks any copyright protection, that output does not qualify for related defenses such as fair use, because these defenses require significant transformative effort on the work in question. If something cannot be transformative, novel, or new enough to qualify for copyright protection in the first place, it's impossible for it to be transformative enough for a fair use defense. It also cannot qualify for copyright protection as a compilation or derivative work, since both must contain copyrightable subject matter--and because the AI output is not copyrightable, it cannot be claimed as either a compilation or a derivative.
- As a result, if the training dataset input to an AI model is infringing, then the output of that AI model is also infringing, since the output does not independently qualify for copyright protection, nor can it leverage related defenses.
I’m sorry, but you realize that this doesn’t make any sense, right? Large corporations are the ones with enough information and/or money at their disposal to train their own AIs without relying on published works. Should any kind of blockade be created to stop AI training on public work, you would effectively be taking AI away from the masses in the form of open-source models, not from those corporations. So if anything, it’s you who is arguing for large corporations to have a monopoly on AI technology as it currently stands.
Large corporations and open-source AI projects are scraping our IP without consent because they think they can get away with it, and because it's easier to steal it than to properly obtain consent from the people whose content they are using. And to be clear, I don't give a shit if preventing AI from stealing copyrighted content kills large open-source AI tools. If the only way they can be useful is by committing mass infringement, then they don't deserve to exist. They can use their own internally developed datasets, use datasets that draw only from the public domain, obtain consent (which may or may not include royalties) from creators, or wither on the vine. That applies to both open-source and commercial AI technology.
Finally, I want to make it 100% clear that I have no issues with AI models that do not use copyrighted material in their training datasets. My employer introduced an AI chatbot trained entirely on our internal and public knowledge bases, and I'm perfectly fine with that morally/ethically/legally. (Personally, I think it's a little useless, since the last time I used it the damn thing confidently gave me a false answer with fake links to nonexistent KB articles, but that's beside the point.) My entire issue with AI centers on the unlicensed use of copyrighted material by AI models without the creator's consent, attribution, or compensation.
Tab completion is the main way I check that I'm using a valid file path in the command, especially when I'm deleting something. (and even then I double and triple check the path when I delete something lol)
Is comprehension necessary for committing copyright infringement? Is it really about a creator being able to be logical or to extend concepts?
I think we have a definition problem with exactly what the issue is. This may be a little too philosophical, but what part of you isn’t processing your historical experiences and generating derivative works? When I say “dog,” the thing that pops into your head is an amalgamation of your past experiences with and visuals of dogs. Is the only difference between you and a computer the fact that you had experiences with non-created works, while the AI is explicitly fed created content?
That's part of it, yes, but nowhere near the whole issue.
I think someone else summarized my issue with AI elsewhere in this thread--AI as it currently stands is fundamentally plagiaristic, because it cannot be anything more than the average of its inputs, and cannot be greater than the sum of its inputs. If you ask ChatGPT to summarize the plot of The Matrix and write a brief analysis of the themes along with its opinions, ChatGPT doesn't watch the movie, do its own analysis, and give you its own summary; instead, it pulls up the parts of the database its learning model was fed that relate to "The Matrix," "movie summaries," and "movie analysis," finds what parts of its training dataset match the prompt--likely an article written by Roger Ebert, maybe some scholarly articles, maybe some Metacritic reviews--and spits out a response that combines those parts into something that sounds relatively coherent.
Another issue, in my opinion, is that ChatGPT can't take general concepts and extend them further. To go back to the movie summary example: if you asked a regular layperson to analyze the themes in The Matrix, they would likely focus on the cool gun battles and neat special effects. If you had that same layperson attend a four-year college and receive a bachelor's in media studies, then asked them to do the exact same analysis of The Matrix, their answer would be drastically different, even if their entire degree never discussed The Matrix once. This is because that layperson is (or at least should be) capable of taking generalized concepts and applying them to specific scenarios--in other words, a layperson can take the media analysis concepts they learned while earning that four-year degree and apply them to a specific work, even if those concepts were never explicitly applied to that work. AI, as it currently stands, is incapable of this. As another example, let's say a brand-new programming language came out tomorrow that was entirely unrelated to any existing language. AI would be nigh-useless at analyzing and helping produce code for that language--even if it were dead simple to use and understand--until enough humans published code samples that could be fed into the AI's training model.
You realize LLMs are deliberately designed not to self-improve, right? It’s totally possible and has been tried--it just usually doesn’t end well when they do.
Tay is yet another example of AI lacking comprehension and intelligence; it produced racist and antisemitic content because it had no comprehension of ethics or morality, and so it just responded to the input given to it. It's a display of "intelligence" on the same level as a slime mold seeking out the biggest nearby source of food--the input Tay received was largely racist/antisemitic, so its output became racist/antisemitic.
And LLMs do learn new things--they’re just called new models, because it takes time and resources to retrain LLMs with new information in mind. It’s up to the human guiding the AI to steer it toward something that isn’t copyright infringement.
And the way that humans do that is by not using copyrighted material in the training dataset. Using copyrighted material to produce an AI model infringes on the rights of the people who created the material, the vast majority of whom are small-time authors, artists, and open-source projects composed of individuals contributing their time and effort. Full stop.
Also, you say “right” and “probable” are without difference, yet once again you bring something into the conversation which can only be “right”: code. You cannot create incorrect code, or it will not work. Text and creative works cannot be wrong; they can only be judged by opinions, not by rule books that say “it works” or “it doesn’t”.
Then why does ChatGPT invent PowerShell cmdlets out of whole cloth--cmdlets that don't exist, yet supposedly accomplish the exact task the prompter asked for?
The last line is just a bit strange honestly. The biggest users of AI are creative minds, and it’s why it’s important that AI models remain open source so all creative minds can use them.
The biggest users of AI are techbros who think that spending half an hour crafting a prompt to get Stable Diffusion to spit out the right blend of artists' labor is anywhere near equivalent to the collective millions of man-hours artists have spent honing their skills to produce the content that AI companies took without consent or attribution and ran through a woodchipper. Oh, and corporations trying to use AI to replace artists, writers, call center employees, tech support agents...
Frankly, I'm absolutely flabbergasted that the popular sentiment on Lemmy seems to be so heavily in favor of defending large corporations taking data produced en masse by individuals without even so much as the most cursory of attribution (to say nothing of consent or compensation) and using it for the companies' personal profit. It's no different morally or ethically than Meta hoovering all of our personal data and reselling it to advertisers.
Undertale was allowed to exist because none of the elements it took inspiration from were eligible for copyright protection. Everything that could have qualified for copyright protection--the dialogue, plot, graphical assets, music, source code--was either produced directly by Toby Fox and Temmie Chang, or used under permissive licenses that allowed reproduction (e.g. the GameMaker Studio engine). Meanwhile, the vast majority of the content OpenAI used to feed its AI models was not produced by OpenAI directly, nor was it obtained under a permissive license.
So... thanks for proving my point?
"right" and "probable" text are distinctions without difference. The simple fact is that an AI is incapable of handling anything outside its learning dataset. If you ask an AI to talk like a pirate, and it hasn't had any pirate speak fed to it by a human via its training dataset, it will utterly fail. If I ask an AI to produce a Powershell script, and it hasn't had code fed to it by a human via its training dataset, it will fail utterly. An AI cannot proactively buy a copy of Learn Powershell In a Month of Lunches and teach itself how to use Powershell. That fundamental shortcoming--the inability to self-improve, to proactively teach itself and apply that new knowledge to existing concepts--is a crucial, necessary element of transformative effort required to produce a derivative work (or fair use).
When that happens, maybe I'll buy that AI is anything more than the single biggest copyright infringement scheme the world has ever seen. Until then, though, I will wholeheartedly support the efforts of creative minds to defend their intellectual property rights against this act of blatant theft by tech companies profiting off their work.
Again, that's not comprehension; that's mixing in yet more data that was put into the model. If you ask an AI to do something outside the dataset it was trained on, it will massively miss the mark. At best, it will produce something close to what you asked, but not quite right. It's why an AI model that could beat the world's best Go players was defeated by a simple strategy that even amateur Go players could catch and counter--the AI never came across that strategy while training against itself, so it had no idea what was going on.
And fair use isn't the bulletproof defense you think it is. Countless fan games have been shut down over the decades, most of them far more transformative than my hypothetical example, such as AM2R. You bet your ass that if I tried to profit off of that hypothetical crossover roguelike, using sprites, models, and textures directly ripped from their respective games, it would be shut down immediately.
EDIT: I also want to address the assertion that AI isn't trained to recreate existing works; in my view, that's wholly irrelevant. If I made a program that took all the Harry Potter books, ran each word through a thesaurus, and sold the result for profit, that would still be infringing, even if no meaningful words were identical to the original source material. Granted, if I curated the output and made a few of the more humorous excerpts available for free through a Mastodon or Lemmy post, that would likely qualify as fair use--but only because a human mind would be parsing the output and filtering out the 99% of meaningless gibberish that a thesaurus-ized Harry Potter would produce.
The only human input to an AI that gave consent to being part of its output is the minuscule input of the prompt given to it by the human, which does not meet the minimal creativity threshold required for copyright protection under law. The rest of the input--the countless terabytes of data scraped from the internet and fed into the AI's training model--was all taken without the authors' consent, and their contribution vastly outweighs that of the prompt author and OpenAI's own transformative efforts via the LLM.
I mean, even a cursory Google search shows that smart TVs can gather a hell of a lot more data than just that, up to and including analyzing the actual video being displayed to figure out what you're watching.