Peanut @Peanutbjelly@sopuli.xyz · Posts 1 · Comments 172 · Joined 2 yr. ago
then what the fuck are you even arguing? i never said "we should do NO regulation!" my criticism was against blaming A.I. for things that aren't problems created by A.I.
i said "you have given no argument against A.I. currently that doesn’t boil down to “the actual problem is unsolvable, so get rid of all automation and technology!” when addressed."
because you haven't made a cohesive point towards anything i've specifically said this entire fucking time.
are you just instigating debate over... something completely unrelated to anything i said in the first place? did you just want to be argumentative and pissy?
i was addressing the general anti-A.I. stance that is heavily pushed in media right now, which is generally unfounded and unreasonable.
I.E. addressing op's article with "Existing datasets still exist. The bigger focus is in crossing modalities and refining content." i'm saying there is a lot of UNREASONABLE flak towards A.I. you freaked out at that? who's the one with no nuance?
your entire response structure is just... for the sake of creating your own argument instead of actually addressing my main concern: unreasonable bias and pushback against the general concept of A.I. as a whole.
i'm not continuing with you because you are just making your own argument and being aggressive.
I never said "we can't have any regulation"
i even specifically said " i have advocated for particular A.I. tools to get much more regulation for over 5-10 years. how long have you been addressing the issue?"
jesus christ you are just an angry accusatory ball of sloppy opinions.
maybe try a conversation next time instead of aggressively wasting people's time.
it's like you just ignored my main points.
get rid of the A.I. = the problem is still the problem. it has been for the past 50 years especially, and any non-A.I. advancement continues the trend in exactly the same way. you solved nothing.
get rid of the actual problem = you did it! now all of technology is a good thing instead of a bad thing.
false information? already a problem without A.I. always has been. media control, paid propagandists etc. if anything, A.I. might encourage the main population to learn what critical thought is. it's still just as bad if you get rid of A.I.
" CLAIMING you care about it, only to complain every single time any regulation or way to control this is proposed, because you either don’t actually care and are just saying it for rhetoric" think this is called a strawman. i have advocated for particular A.I. tools to get much more regulation for over 5-10 years. how long have you been addressing the issue?
you have given no argument against A.I. currently that doesn't boil down to "the actual problem is unsolvable, so get rid of all automation and technology!" when addressed.
which again, solves nothing, and doesn't improve anything.
should i tie your opinions to the actual result of your actions?
say you succeed. A.I. is gone. nothing has changed. inequality is still getting worse and everything is terrible. congratulations! you managed to prevent countless scientific discoveries that could help countless people. congrats, the blind and deaf lose their potential assistants. the physically challenged lose potential house-helpers. etc.
on top of that, we lose the biggest argument for socializing the economy going forward, through massive automation that can't be ignored or denied while we demand a fair economy.
for some reason i expect i'm wasting my time trying to convince you, as your argument seems more emotionally motivated than rational.
Are we talking about data science??
There needs to be strict regulation on models used specifically for user manipulation and advertising. Through statistics, these guys know more about you than you do. That's why it feels like they are listening in.
Can we have more focus and education around data analysis and public influence? Right now the majority of people don't even know there is a battle of knowledge and influence that they are losing.
they are different things. it's not exclusively large companies working on and understanding the technology. there's a fantastic open-source community, and a lot of users of their creations.
would destroying the open-source community help prevent big tech from taking over? that battle has already been lost and needs correction. crying about the evil of A.I. doesn't actually solve anything. "proper" regulation is also relative. we need entirely new paradigms for understanding things like "I.P." which aren't based on a century of lobbying from companies like disney, etc.
and yes, understanding how something works is important for actually understanding the effects, when a lot of tosh is spewed from media sites that only care to say what gets people to engage.
i'd say only a fraction of what i see as vaguely directed anger towards anything A.I. is actually aimed at areas that are genuinely severe and important breaches of public trust and safety, and i think the advertising industry should be the absolute focal point of the danger of A.I.
Are you also arguing against every other technology that has had its benefits hoarded by the rich?
Funny, I don't see much talk in this thread about François Chollet's Abstraction and Reasoning Corpus, which is emphasised in the article. It's a really neat take on how to measure the capacity for thought.
A couple things that stick out to me about gpt4 and the like are the lack of understanding in the realms that require multimodal interpretations, the inability to break down word and letter relationships due to tokenization, lack of true emotional ability, and similarity to the "leap before you look" aspect of our own subconscious ability to pull words out of our own ass. Imagine if you could only say the first thing that comes to mind without ever thinking or correcting before letting the words out.
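The tokenization point above is easy to demonstrate. Below is a toy sketch of why letter-level questions are hard for a tokenized model: the vocabulary and the greedy longest-match scheme here are made up for illustration (real BPE tokenizers are trained and work differently), but the effect is the same — the model receives opaque chunk IDs, never individual letters.

```python
# Toy illustration (NOT a real tokenizer): greedy longest-match
# segmentation over a made-up subword vocabulary.
VOCAB = ["straw", "berry", "ber", "ry",
         "s", "t", "r", "a", "w", "b", "e", "y"]

def tokenize(word):
    """Split a word into the longest matching vocabulary pieces, left to right."""
    tokens = []
    i = 0
    pieces = sorted(VOCAB, key=len, reverse=True)  # try longest pieces first
    while i < len(word):
        match = next(p for p in pieces if word.startswith(p, i))
        tokens.append(match)
        i += len(match)
    return tokens

# The model sees two chunks, not ten letters, so "how many r's are in
# strawberry?" asks about structure the input never directly exposes.
print(tokenize("strawberry"))  # ['straw', 'berry']
```

A trained BPE vocabulary would produce different splits, but any subword scheme hides letter counts and letter positions behind token boundaries in the same way.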
I'm curious about what things will look like after solving those first couple problems, but there's even more to figure out after that.
Going by recent work I enjoy from Earl K. Miller, we seem to have oscillatory cycles of thought directed by brain waves in a higher-dimensional representational space. This might explain how we predict and react, as well as how we hold a thought to bridge certain concepts together.
I wonder if this aspect could be properly reconstructed in a model, or from functions built around concepts like the "tree of thought" paper.
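For reference, the core loop of the "tree of thought" idea mentioned above can be sketched in a few lines: expand several candidate partial solutions per step, score them with an evaluator, and keep only the best few. In the actual paper a language model plays both the generator and the evaluator; here both are toy stand-ins (digit-appending and prefix-matching against a target string) just to make the search structure concrete.

```python
# Minimal sketch of tree-of-thought style search with a beam.
# The "thoughts" are strings of digits; a real system would use an LLM
# to propose and score candidate reasoning steps instead.

def expand(state):
    """Toy generator: propose three continuations of a partial solution."""
    return [state + d for d in "012"]

def score(state, target="0120"):
    """Toy evaluator: reward positions that match the target string."""
    return sum(1 for a, b in zip(state, target) if a == b)

def tree_of_thought(start="", steps=4, beam=2):
    frontier = [start]
    for _ in range(steps):
        # expand every surviving candidate, then prune to the beam width
        candidates = [c for s in frontier for c in expand(s)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]

print(tree_of_thought())  # '0120'
```

The interesting design question, echoed in the comment above, is whether holding and revisiting branches like this can approximate the "bridge a thought across concepts" role that oscillatory cycles may play in brains.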
It's really interesting comparing organic and artificial methods and abilities to process or create information.
surprise? it was a coin toss when they started devil may cry 5. glad to see it's coming now though. the original was fantastic due to how creative it was. they cared a lot about mechanics being natural.
most systems were very unique in implementation; how big or small you were affected many different little things, and the dragon-forging mechanic was neat. there were lovely details in fighting. moments like setting yourself on fire and jumping onto a griffin as it takes off. it catches fire while you stab it in the face, crashing to the ground. your mage finishes their tornado and then you can't see anything for a while, but the griffin dies anyway. great times.
this is a difficult one.
for people (as well as myself) to understand nuance and the complicated nature of communication and interaction. our brains are good at filling in gaps of information in ways that are difficult for us to perceive. there is a complexity and sparsity of interpretations and perspectives which we are largely incapable of realizing, due to the excess of knowledge and experience in the world, which can be combined or perceived in countless different ways. we are especially ignorant of what we are ignorant of.
this means we exist in a high-dimensional battlefield ball of misunderstanding, misinterpretation, and unintended inability to convey what was intended.
when we say something to someone, we expect them to understand what we mean, but their interpretation of the words we use can vary wildly in ways we could not have predicted from our own perspective. we may also fail to realize that the other party holds understandings or beliefs which shape their reading of our words in ways we can't see, and wouldn't know to discover.
at the same time, many people are more susceptible to statistically engineered trend-setting. this is mostly exploited by bad actors who don't mind saying whatever they know will "work" instead of trying to convince people of what is true or reasonable.
TLDR: we are more confident than we should be for almost everything. we also suck at communicating for reasons that are too complex to fully see or interpret. be patient and reasonable, as we are all missing information. a good mediator helps find gaps in perspective. try not to be controlled by your emotion or instinctual reactions to situations. be critical when interpreting new information.
i would note that nothing is without nuance. while nowhere near comparable, there are some liberals (new-age hippies in B.C., canada, for example) that aren't 100% on their fact checking.
but that's unavoidable in any group that is large and diverse enough.
i think it's a cultural mentality that discourages critical thinking which leads to most conservative ideology to begin with.
i laughed pretty hard when south park did their chatgpt episode. they captured the school response accurately with the shaman doing whatever he wanted, in order to find content "created by AI."
again, the issue isn't the technology, but the system that forces every technological development into functioning "in the name of increased profits for a tiny few."
that has been an issue for the fifty years prior to LLMs, and will continue to be the main issue after.
removing LLMs or other AI will not fix the issue. why is it constantly framed as if it would?
we should be demanding the system adjust for the productivity increases we've already seen, as well to what we expect in the near future. the system should make every advancement a boon for the general populace, not the obscenely wealthy few.
even the fears of propaganda. the wealthy can already afford to manipulate public discourse beyond the general public's ability to keep up. the bigger issue is in plain sight, but is still being largely ignored for the slant that "AI is the problem."
The wording of every single article has such an anti AI slant, and I feel the propaganda really working this past half year. Still nobody cares about advertising companies, but LLMs are the devil.
Existing datasets still exist. The bigger focus is in crossing modalities and refining content.
Why is the negative focus always on the tech and not the political system that actually makes it a possible negative for people?
I swear, most of the people with heavy opinions don't even know half of how the machines work or what they are doing.
some subreddits were basically bots posting new topical research papers, which i appreciated.
Exactly what I keep saying when people start blaming the tools being used for automation. Productivity is up and up and up, but none of that has been given back to the workers in the past fifty years. If I try to find dialogue on that issue, I run into a mountain of blatant propaganda defending the continued robbery of the middle and lower classes.
TY. i need to stop commenting with phone swipe keyboard.
We already know we aren't allowed to use someone's likeness without permission. The issue is companies like Disney who will end up legally owning all of the likenesses. Especially if we continue to beef up copyright, they will end up owning likeness to all artistic styles. Grimes did it right with the voice tech, but even that doesn't fix the real issue.
We need to fix the system we live in, which is so terrible that it makes amazing new technology seem like a negative to the larger populace. We could destroy the loom to keep people employed, but that doesn't actually help anyone. It's no coincidence that we have record profits at the same time as unreasonable price hikes, or that people are overworked and struggling after fifty years of unimaginable productivity growth.
There's a mountain of propaganda defending the rich as well. If I try to search for views critical of the ones that plundered the entire world, I get bombarded with excuses and defenses for indefensible behaviors. Why are people freaking out about the tech reaching Utopian levels when the real issue is keeping the thieves from stealing every gain we have as a society?
No note of who is specifically responsible? Politician or company? Even decades down the line, there should be repercussions for such avoidable tragedies. People shouldn't get away with such obviously terrible acts and environmental destruction while normal people suffer the consequences and pay for it. Somebody walks away with a lot of money whenever they decide to destroy the environment instead of safely disposing of dangerous waste. I don't see any aspect of that addressed in the article.
i get that. i think there's a highly abstracted, or high-dimensional, complication to discussing social issues (or anything else) which has never been properly addressed. people are bad at visualizing the scale of variation in experiences and interpretations that exist. even communicating basic information is difficult, from vastly varying interpretations of words and phrases to differences in local social ecosystems and lived environments. that is enough to make properly conveying or interpreting information increasingly difficult as the scale and diversity of environments grows. now that we're all connected, it's a lot all at once.
i often end up overly defensive as well, due to a history of rather aggressive dismissal or denial of some notable traumas in my life, to the degree of being harassed and insulted in a way i think most would have difficulty not internalizing. my main issue is that people generally suck at communicating and understanding each other, and that we can't even talk about that without it being polarized and shut down.
i think fixing that would end better for everyone, regardless of personal history.
That's largely what these specialists are talking about: people emphasising the existential apocalypse scenarios when there are more pressing matters. I think the intended purpose of a tool should be more of a concern than the training data in many cases. People keep freaking out about LLMs and art models while ignoring the plague of models built specifically to manipulate and predict the subconscious habits and activities of individuals. Models built to recreate a unique individual and their likeness for financial reasons should also be regulated in new and unique ways. People shouldn't be able to be bought wholesale; rather, they should be able to sell their likeness as a subscription, with the right to withdraw from future production, etc.
I think the ways we think about a lot of things have to change based around the type of society we want. I vote getting away from a system that lets a few own everything until people no longer have the right to live.
i mean, my whole concern wouldn't be a concern if responses were like oneofthemladygoats' even half of the time the issue was broached. this is legitimately the most accepting or positive response to my concerns that i've ever experienced, online or in person.
i would say i've not stated any opinions that should even be controversial. i'm asking purely for recognition of bad actors and the harm they might bring, rather than a refusal to address or accept anything implying that bad actors can even exist in the community.
my issue has never been people disagreeing with my points on the topic, but the adamant refusal to even recognize that certain situations exist, which likely contributes to the personal experience of people who end up feeling hopeless or angry. i think this feeds into situations like op's article when it happens to less reasonable or more violent individuals. maybe if someone had recognized their issues and tried to understand their perspective, they could have been derailed from whatever echo-chamber they were trapped in. this would also help combat the intentionally polarizing articles made to ensure cohesion is never found, because anger gets more clicks. it helps to remember that experiences and personal truths exist on different societal and cultural scales. we're interacting from a very messy starting point.
perhaps we are just going in circles, but i hope to see more positive change in the future. i do think dialogue in some corners has become less aggressive, and this thread has been a good example. if some of the more emotionally unstable people gain the ability to communicate their grievances and understand the perspectives they are unaligned with, maybe we can avoid more of these trends. recognizing and ending support for bad actors in the community might just help.
i'll meditate on this, but i implore the same in your direction.
We're still not factoring in pure profit vs actual overhead on big-chain price increases?
Not even talking about it?
Ok.