
  • Firstly, I'm willing to bet only a minority of users regularly use those buttons. Secondly, you're talking about the most popular LLM(s) out there. What about all the other LLMs almost nobody is using but are still being developed/researched? Where do they find humans willing to sit and rate all the garbage their LLM puts out?

  • I know LLMs are used to grade LLMs. That isn't solving the problem; it's just better than nothing because there are no alternatives. There aren't enough humans willing to endlessly sit and grade LLM responses.

  • We don't know what we don't know. Maybe 5 minutes is all it takes to understand the essence of a problem. Maybe several lifetimes. There are examples of people who have studied something for a long time yet have come to more incorrect conclusions than someone who reads a single paper on the subject might. (There are physicists who believe consciousness is "real" but "unphysical", biologists who think life must have been created and nurtured by a god, and healthcare specialists who think vaccines are bad.)

    That doesn't justify being arrogant, naive, or dismissive of people more knowledgeable in a subject, but it does make it tempting to decide that the person you're arguing with is one such example because "the truth is bloody obvious".

    It's painful to read people's takes on things you know something about. At the same time, most of us do the exact same thing whenever we weigh in on something we don't know as much about, because we assume we don't need to.

  • For that you need a program to judge the quality of output given some input. If we had that, LLMs could just improve themselves directly, bypassing any need for prompt engineering in the first place.

    The reason prompt engineering is a thing is that people know what output is expected and desired and what isn't, and can adapt their interactions with the tool accordingly, a trait uniquely associated with adaptive complex systems.
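
    A minimal sketch of that point, with hypothetical stand-ins (neither `generate` nor `judge` is a real API): if a reliable judge program existed, prompt engineering would reduce to an automated search loop with no human in it.

    ```python
    # Hypothetical sketch: if output quality could be scored programmatically,
    # prompt engineering collapses into automated search.

    def generate(prompt: str) -> str:
        # Stand-in for an LLM call; imagine it returns the model's response.
        return f"response to: {prompt}"

    def judge(output: str) -> float:
        # The missing piece: a program that reliably scores output quality.
        # (If this existed, models could be optimised against it directly.)
        return float(len(output))  # placeholder heuristic, not a real judge

    def best_output(task: str, prompt_variants: list[str]) -> str:
        # Try each phrasing of the task and keep the highest-scoring result,
        # with no human judgement anywhere in the loop.
        candidates = [generate(p.format(task=task)) for p in prompt_variants]
        return max(candidates, key=judge)

    print(best_output("summarise this article",
                      ["{task}", "You are an expert editor. {task}"]))
    ```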

  • Could just as well have gone the other way though. Sassy CM telling some loud, annoying, entitled brat to git gud or cry more? Instant cool-dev meme. But if a lot of people feel similarly you get outrage and controversy. Just depends on the local culture on that particular day in that particular place.

    It's cool to be rude as long as you also feel that it's warranted. It's cool to offend people you don't like or deride ideas you think are stupid. Most people are always just one wrong audience away from being a horrible person.

    Of course, CM or PR staff are held to different expectations, but I can understand why they might sometimes gamble on trying to be cool and casual.

  • If Linux gaming continues to increase in popularity, I imagine anti-cheat will start to crawl its way out of the WINE environment and into the native system. But I actually have no clue how these anti-cheat systems work or how WINE handles them.

  • Like a completely mad or autistic artist that is creating interesting imagery but has no clue what it means.

    Autists usually have no trouble understanding the world around them. Many are just unable to interface with it the way people normally do.

    It’s a reflection of our society in a weird mirror.

    Well yes, it's trained on human output. Cultural biases and shortcomings in our species will be reflected in what such an AI spits out.

    When you sit there thinking up or refining prompts you’re basically outsourcing the imaginative visualizing part of your brain. [...] So AI generation is at least some portion of the artistic or creative process but not all of it.

    We use a lot of devices in our daily lives, whether for creative or practical purposes. Every such device is an extension of ourselves; some supplement our intellectual shortcomings, others our physical ones. That doesn't make the devices capable of doing any of the things we do. We just don't attribute actions or agency to our tools the way we do to living things. Current AI possesses no more agency than a keyboard does, and since we don't consider our keyboards capable of authoring an essay, I don't think one can reasonably say that current AI is, either.

    A keyboard doesn't understand the content of our essay, it's just there to translate physical action into digital signals representing keypresses; likewise, an LLM doesn't understand the content of our essay, it's just translating a small body of text into a statistically related (often larger) body of text. An LLM can't create a story any more than our keyboard can create characters on a screen.

    Only if and when we observe AI behaviour indicative of agency can we start to use words like "creative" to describe it. For now (and I suspect for quite some time into the future), all we have are sophisticated statistical content generators.

  • Yeah a real problem here is how you get an AI which doesn't understand what it is doing to create something complete and still coherent. These clips are cool and all, and so are the tiny essays put out by LLMs, but what you see is literally all you are getting; there are no thoughts, ideas or abstract concepts underlying any of it. There is no meaning or narrative to be found which connects one scene or paragraph to another. It's a puzzle laid out by an idiot following generic instructions.

    That which created the woman walking down that street doesn't know what either of those things are, and so it simply cannot use those concepts to create a coherent narrative. That job still falls on the human instructing the AI, and nothing suggests that we are anywhere close to replacing that human glue.

    Current AI cannot conceptualise -- much less realise -- ideas, and so it cannot be creative or create art by any sensible definition. That isn't to say that what is produced using AI can't be posed as, mistaken for, or used to make art. I'd like to see more of that last part and less of the former two, personally.

  • I didn't know about this game. I love pirate stuff: the boats and aesthetics of that era, the natural environments of the Caribbean, the sociopolitical developments of the time, and of course the stories and mythologies... but Skull and Bones fails to interest me in the slightest.

    It appears to be an arcade game where you just press keys to move your ship around, shoot at things until their health bar depletes, and go around playing minigames to collect loot/resources. I don't know anything about the story content but I'm willing to bet there's at best some passably written character arc but nothing resembling a deep commentary on the relevant issues of that time (nor our time).

    I'm almost laughably far from being representative of the average gamer, but the number of 'A's assigned to titles (so far) hasn't been indicative of quality as I perceive it. Budget and effort are mostly orthogonal to the artistic and creative value of a work.

  • No, different apps this time.

    Edit: Oh I see, you meant that each app needs to be manually updated once first.

  • Doesn't seem to be working for me. I just saw a bunch of stalled notifications (19 hours old; stalled as in stuck at downloading/ready to install), and when I go into the app it's just the same old flow: an offer to download, and after that the option to install each one separately.

  • I updated to 1.19 and have two app updates listed as available. They are not updated automatically, and there is no F-Droid setting for background updates that I can find. To install the two aforementioned updates, I first have to download them and then, for each one, press install and confirm a popup.

    To be fair, those updates were already available before I updated F-Droid, so whatever mechanism is supposed to trigger the automatic update may not have fired because the updates weren't new?

    Nevertheless I am excited about the prospect, because updating my apps has been such a pain that I constantly procrastinate dealing with it. Sitting with the phone in front of me, clicking a few times, waiting, clicking a few times, waiting, then repeating... never leaving the app and making sure it doesn't fall asleep... it is not a fun activity.

  • It's not so much the hardware as it is the software and utilisation. By software I don't necessarily mean any specific algorithm; I know much thought is given to optimisation strategies in the implementation and design of machine learning architectures. What I mean by software is the full stack considered as a whole, and by utilisation I mean the way services advertise and make use of ill-suited architectures.

    The full stack consists of general purpose computing devices with an unreasonable number of layers of abstraction between the hardware and the languages used in implementations of machine learning. A lot of this stuff is written in Python! While algorithmic complexity is naturally a major factor, how it is compiled and executed matters a lot, too.
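
    As a crude illustration of that last point (not a rigorous benchmark; numpy here is just a stand-in for any compiled kernel), the same arithmetic can pay wildly different overhead depending on how it is executed:

    ```python
    # Same computation, two execution paths: an interpreted Python loop
    # versus a single call into numpy's compiled native code.
    import timeit

    import numpy as np

    xs = list(range(1_000_000))
    arr = np.arange(1_000_000, dtype=np.float64)

    def python_sum_of_squares() -> float:
        total = 0.0
        for x in xs:  # every iteration pays full interpreter overhead
            total += x * x
        return total

    def numpy_sum_of_squares() -> float:
        return float(np.dot(arr, arr))  # one call into optimised C/BLAS

    print("pure Python:", timeit.timeit(python_sum_of_squares, number=10))
    print("numpy      :", timeit.timeit(numpy_sum_of_squares, number=10))
    ```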

    Once AI implementations stabilise, the theoretically most energy-efficient way to run them would be on custom hardware made to run only that code, written at the lowest possible level of abstraction. The closer we get to the metal (or the closer the metal gets to our program), the more efficient we can make it. I don't think we take bespoke hardware seriously enough; we're stuck in this mindset of everything being general-purpose.

    As for utilisation: LLMs are not suited to, or even capable of, dealing with logical problems or anything involving reasoning based on knowledge; they can't even reliably regurgitate knowledge. Yet, as far as I can tell, this constitutes a significant portion of their current use.

    If the use of LLMs were reserved for solving linguistic problems, then we wouldn't be wasting so much energy generating text and expecting it to contain wisdom. A language model should serve as a surface layer -- an interface -- on top of bespoke tools, including other domain-specific types of models. I know we're seeing this idea being iterated on, but I don't see it being pushed nearly enough.[^1]
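
    A toy sketch of that "surface layer" idea (every name here is hypothetical, not any real product's API): the language model's only job is mapping free-form text to a tool call, while deterministic, domain-specific tools do the actual work.

    ```python
    # Toy dispatcher: language in, tool call out. The "LLM" below is a
    # trivial stand-in for a model used purely as a linguistic interface.

    def llm_parse_intent(utterance: str) -> tuple[str, str]:
        # Pretend this is a language model mapping text -> (tool, argument).
        if any(ch.isdigit() for ch in utterance):
            return ("calculator", utterance)
        return ("knowledge_base", utterance)

    TOOLS = {
        # Bespoke tools answer deterministically; nobody has to hope that
        # a text generator happens to "know" the right answer.
        "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
        "knowledge_base": lambda q: f"[lookup result for {q!r}]",
    }

    def answer(utterance: str) -> str:
        tool, arg = llm_parse_intent(utterance)
        return TOOLS[tool](arg)

    print(answer("2 + 2 * 10"))         # routed to the calculator
    print(answer("capital of France"))  # routed to a knowledge lookup
    ```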

    When it comes to image generation models, I think it's wrong to focus on generating derivative art/remixes of existing works instead of on tools that help artists express themselves. All these image generation sites we have now consume so much power just so that artistically wanting people can generate 20 versions (give or take an order of magnitude) of the same generic thing. I would like to see AI technology made specifically for integration into professional workflows and tools, enabling creative people to enhance and iterate on their work through specific instructions.[^2] The AI we have now is made for people who can't tell (or don't care about) the difference between remixing and creating and just want to tell the computer to make something nice so they can use it to sell their products.

    The end result in all these cases is that fewer people can live off of being creative and/or knowledgeable while energy consumption spikes as computers generate shitty substitutes. After all, capitalism is all about efficient allocation of resources. It just so happens that quality (of life, art, anything) is inefficient and exploiting the planet is cheap.

    [^1]: For example, **why does OpenAI gate external tool integration behind a payment plan while offering simple text generation for free?** That just encourages people to rely on text generation for all kinds of tasks it's not suitable for. Other examples include companies offering AI "assistants" or even AI "teachers"(!), all of which are incapable of even remembering the topic being discussed 2 minutes into a conversation.

    [^2]: I get incredibly frustrated when I try to use image generation tools, because I go in with a vision, but since the models are incapable of creating anything new based on actual concepts, I only ever end up with something artistically compromised and derivative. I can generate hundreds of images based on various contortions of the same prompt, reference image, masking, etc., and still not get what I want. THAT is an inefficient use of resources, and it's all because the tools are just not made to help me do art.

  • It’s not like corporations are some animal who can’t help but be who they are.

    That's exactly what they are. They are composed of people only to the extent that a car is composed of wheels.

    If it's otherwise in working order, a flat tire will be replaced and the car will be going wherever it's meant to go. Profit city is where all roads lead, and a flat tire (or four) can only delay it for so long.

    If you want to hold corporations to moral standards, you have to change the incentives (destinations) and restructure corporations to be actually owned and controlled by people who are then held to those moral standards (put more of the car into the wheels).

  • I think of it as a problem of "attention dysregulation". At least that feels like a closer description, since attention is a very central component in many of the difficulties we experience - it just can't be reduced to a "deficit" (whatever that could even mean).

    You probably know this already, but I like to (re)phrase existing knowledge in several ways, even if just for myself, because one can know something in more than one way: attention regulation is how a brain prioritises, filters, and emphasises information about the external world, and I believe it also plays a big (and interesting) part in executive function.

    I understand the general concept of 'attention' as an allocation/distribution mechanism for cognitive resources, so calling it "deficient" feels a bit like a category error. It's like reducing the challenges faced by a governing body that has mismanaged an economy to an "economy deficit problem". It just doesn't make much sense, even if the end result, in some areas, looks like a deficit of resources (focus, in this analogy).

  • Is this going to be available for free? And if so, to what extent? I'm not paying for AI, but would be cool to try it out.

    I've also been burnt a few times by registering for some "free" AI service only to realise, after putting actual effort into trying to create something, that literally any value you might extract from it is gated behind a payment plan. This was the case when I tried generating voices, for example: spend an hour crafting something I like; generate any actual audio with it? Pay up. It's like trying out a free MMO where you spend a long time creating your character just the way you want it, only to be greeted by "trial over - subscribe now!"

  • True, I could have identified those as suggested solutions (albeit rather broad and unspecific, which is perfectly fine). I also sympathise on both accounts.

    I have this personal intuition that a lot of social friction could be mitigated if we took some inspiration from the principle of locality in physics when designing social networks and structuring society in general. The idea of locality in physics is that physical systems interact only with their adjacent neighbours. The analogous social principle I have in mind is that interactions between people who understand and respect each other should be facilitated and emphasised, while (direct) interactions between people far apart on (some notion of) a "compatibility spectrum" should be limited and de-emphasised. The idea is that this would let political and cultural ideas propagate with proportionate friction, resulting in a gradual dissipation of truly incompatible views and norms, which would hopefully reduce polarisation.

    The way it works today is that people are constantly exposed directly to strangers' unpalatable ideas and cultures, and there is zero reason for someone to seriously consider any of that since no trust or understanding exists between the (often largely unconsenting) audience and the (often loud) proponents. If some sentiment was instead communicated to a person after having passed through a series of increasingly trusted people (and after likely having undergone some revisions and filtering), that would make the person more likely to consider and extract value from it, and that would bring them a little bit closer to the opposite end of that chain.

    Anyway, those are my musings on this matter.

  • We don't have to prove that the brain isn't puppeted from some external realm of "consciousness" in order to be quite confident that it isn't. Positing that there is such a thing as free will in the traditional sense of the term is magical thinking, which most of us might agree isn't particularly respectable.

    What we can do is take a compatibilist approach and say there is something that is "effectively indeterministic" about human decision making, because we can't ever ourselves predict our own actions any faster than we observe them. I don't have any moral contribution to make here; I just wanted to add this reflection.

  • I don't see em suggesting any particular solutions, so I'm not sure what you are criticizing, or why you think it would result in Elon remaining at large any more than figurative fruit throwing would.

    I agree that social repercussions have a place, but I also agree that it is only "good enough" for many -- but not all -- situations. Seeking a more sophisticated approach based on studying and identifying potential root causes seems to me like it would be more sustainable, not to mention an opportunity for individual growth.

  • One of the last things I remember is Oberyn getting his mind blown