
  • Which is interesting in itself: what if AI by chance produces a likeness of you, unintentionally? Is there an AI that has a database of all of us to know that? I'm sure they're trying, for whatever reason.

    Now, if you're someone famous, like a pop star or president, chances are there are a lot more images of you in those databases, which could also skew the resulting images.

    So I guess what we really need is some way to trust the image, otherwise ... I really don't know how this can be avoided; maybe a smarter entity does.

  • What an interesting and extremely limiting perspective. I'm not in the writing industry, but I'll do my best.

    Two writers write the same article, beginning at the same time. One uses a quill, ink and blotting paper; the other, vi. Is the one using a PC less of a writer for using a tool that makes it easier?

    The PC crashes, so they end up finishing the article at the same time anyway.

    The handwritten one is then delivered to an editing team's in-tray. The PC one is thrown to an AI for preliminary editing, and the writer instantly checks/corrects the edits. Repeat as necessary and submit for approval.

    There is room for AI in writing, but I think writing by only humans will be more diverse and therefore enjoyable.

    e: somehow dragged a line to the wrong place.. Need an editor :p

  • I think both are true; it really depends on the business and the mentality of the exec. It is extremely difficult to get software approved in my environment if it doesn't come with some kind of vendor support.

    Basically they want assurance that if something breaks, they can get someone to fix it if necessary.

    Personally, I don't think this is the best approach. Vendor support is often underwhelming, and it is not forever. The longer you want it, the more it will cost you to keep it. By the time they cash out, you're so invested the cost to change is prohibitive.

    My biggest gripe with closed source software is the pissweak amount of peer review it gets, and it shows repeatedly. It's disturbing that we use things as important as operating systems and security products that only get scrutinised by a small number of people. People who probably all have similar methodologies and tools at their disposal. So you forever see CVEs because they miss simple things. We've actually had a vendor (who we spend millions on yearly) tell us they wouldn't fix a CVSS 9.9 because they were planning to discontinue the product, and to sign an NDA.

    I would love to convince my org to refit to OSS, but it would be an enormous investment just to transition. And honestly, with the stuff we're seeing on the horizon of tech, I'm expecting some wild shifts in the way we do things on a similar 10-year timeline. It's been nice working with x86 since the 8086, but it's time.

  • You're not wrong in any way, they definitely do need more funding. No matter how much you throw at it, the enormity of the task of making sure everything that wants to come to market is safe for humans is staggering; I can't imagine how humans can even keep up.

    Sure, the risks associated with brain implants are high, but it's something you (hopefully) very consciously have to agree to. There's more value in testing some artificial sweetener to make sure it doesn't give us diabetes.

  • Let's not forget water.. And eventually, oxygen.. But keep buying/selling those trinkets, people, for the economy.

    And well, how many of these resource estimates leave enough for other life too, or does all other life just exist to feed us?..

  • I think that a lot of the 8 billion people here would disagree with "comfortably well". That number needs to be closer to two billion to be sustainable with Earth's resources. At least that's my understanding; I wouldn't be disappointed to be wrong.

  • It really depends on what kind of applications you're talking about. There are still a number of things it can't run (or at least not without a lot of meddling to get there) in the professional space, like CAD. Hopefully this will change over time.

    For a lot of these products there are free alternatives available, but they often don't cut the mustard and/or aren't worth retraining for.

    Another thing you should consider before choosing Linux is hardware support, which is often lacking. For example, your fancy tablet might work fine as a tablet, but if you want to configure anything about it you might need Windows, depending on the device.

    The good news is, you can try it without worrying about harming your Windows install by running it from, say, a USB stick or a spare HDD. It'll only cost you time and effort.
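
    As a rough sketch of the USB route (the ISO filename and `/dev/sdX` are placeholders for your own download and device; double-check the target with `lsblk` first, since `dd` will overwrite whatever it points at):

    ```shell
    # Identify the USB stick by size and name (e.g. /dev/sdX) -- check carefully.
    lsblk

    # Write the downloaded ISO to the stick. This erases everything on it.
    # Replace linuxmint.iso and /dev/sdX with your actual file and device.
    sudo dd if=~/Downloads/linuxmint.iso of=/dev/sdX bs=4M status=progress conv=fsync
    ```

    Then reboot and pick the USB stick from the boot menu; most distros boot into a live session that leaves your internal disk untouched unless you explicitly run the installer.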

  • Community support is a thing; it's not the lack of support that's to blame here. Have you ever used Microsoft support? Linux support is arguably even more accessible.

    A lot of the blame here lies with Microsoft's clever marketing campaign of providing Windows to educational institutions - with support - for far below cost, in the early days when PC adoption was on the rise.

    Distribution saturation is a barrier to entry and to focused support, and Linux is sometimes more complicated to install and repair. Sometimes it's easier to repair, because Windows is too busy trying to hide its internals from you.

    It's usually easier to support a remote IT-illiterate person using Linux than one using Windows, today.

    e: I guess to be fair, if you factored in community support for Windows, your options open up quite a lot. I was more thinking about my own interactions with their support. But enterprise support/problems are not the same as personal ones.

  • Let's not argue about the potential of "any human-machine interface", because nobody knows how far that can go. We have an idea, but there's still way too much we don't understand.

    You're right, humans never have and never will alone. It's a long shot, and as I said it's pretty unlikely, because the models will just get better at compensating. But I imagine if people were interacting with LLMs regularly - vocally - they would soon get tired of extended conversations to get what they want, and repeated training in forming those questions for an LLM would maybe in turn reflect in their human interactions.

  • I'm going to take the time to illustrate here how I can see LLMs affecting human speech, through existing applications and technologies that are (or could be) made both available and popular enough to achieve this. We're far enough down the comment chain I can reply to myself now, right?

    So, we can all agree that people are increasingly using LLMs, in the form of ChatGPT and the like, to acquire knowledge/information, the same way they would use a search engine to follow a link to that knowledge.

    Speech-to-text has been a thing for at least three decades (yeah, it was pretty hopeless once, but not so much now). So let's not argue about speech vs text. People already talk to Google, Siri and whoever else to this end, including LLMs, and have their responses read out via TTS.

    I remember being blown away in 1998, watching a blind sysadmin interacting with a Linux shell via TTS at rates where I couldn't even make out the words. How far we've come. I digress, so.

    We've all experienced trouble getting the information we're looking for, even with all these tools, because there's so much information and it can be very difficult to find the needle in the haystack. So we constantly have to refine our queries, either to be more specific or to exclude relationships to other information.

    This in turn causes us to think more frequently about the words we're using to get the results we want, because otherwise we spend too much time on iteration.

    In turn, the more we do this, and are trained to do this, the more it will bleed into human communication.

    Now look, there is absolutely a lot of hopium smoking going on here, but damn, this could have a lasting impact on verbal communication. If technology can train people - through inaccurate/incorrect results - to think about the communication going out when they speak, we could drastically reduce the amount of miscommunication between people by that alone.

    Imagine:

    "get me a chair"

    *wheels out an office chair from the study*

    "no, I meant a chair for the kitchen table"

    Vs

    "get me a chair for the kitchen table"

    You can apply the same thing to human prompted image generation and video generation.

    Now.. We don't need LLMs to do this, or to know this. But we are never going to achieve it without a third party - the "LLM" and whatever it's plugged into - because a human recipient will usually be more capable of resolving these variances, or will employ other contexts not as accessible via a single channel like speech or text.

    But if machines train us to communicate better (more accurately, precisely and/or concisely), that is an effect I can't welcome enough.

    Realistically, the machines will learn to deal with us being dumb, before we adapt.

    e: formatting.