
Posts
0
Comments
815
Joined
2 yr. ago

  • Ha yeah, in the same way we're still using the same old p-n semiconductor wafers from the 90s - it's basically the same thing, which is why I still use my P120 and it's just as good as any of these modern machines with their fancy 7 nm pathways!

    The batteries used today are much better than old ones, and the manufacturing technologies are far superior too. It depends on the device of course, but energy density, charge speed, and reliability have all increased while manufacturing cost and requirements have come down; low-lithium batteries are getting more common, for example.

    Plus it's getting increasingly likely that the lithium in your battery has already been part of a different battery previously, thanks to new recycling methods, so that's pretty cool.

  • Yeah, and a good sign is that the countries with money to invest in the race all seem to be convinced we've got the science right and that the engineering challenges are solvable. So many records have been broken recently that we're getting towards the end of the milestones; hopefully soon we'll start hearing about self-sustaining experiments with records for how long they ran.

  • Ha ok, we'll see how that prediction pans out.

    Yes, the expensive and complex products available today limit the audience, which in turn lowers the attractiveness of the market to creators, which further inhibits uptake. The exact same thing is clearly visible in the home computer adoption curve and many similar developments.

    First adopters create an ecosystem of markets, which results in a growing diversity of established use cases - many ideas fail, but some prove to be very efficient and effective as part of a workflow, which over time becomes the standard way of doing things.

    As more things get established in VR, it transitions from being something major creators don't really bother with to something they make a show of supporting - especially as the general ecosystem matures, so choices like which menu style to use or how to orient views become easy. This changes VR from a niche special-use tool into a fairly general one that a lot of people are used to using.

    At that point we'll see a lot of cheap consumer devices, which results in a lot more development in the market - especially as natural language input through LLMs makes control interfaces easier, and similar generative AI makes creating VR environments easier.

    VR is going to be something most people are used to using somewhat regularly. I don't think it'll replace screens, but there are a lot of things we currently do on a screen that will just make more sense in VR.

  • If you ask it to make up nonsense and it does, then you can't get angry lol. I normally use it to help analyse code or write sections of code, and sometimes to teach me how certain functions or principles work - it's incredibly good at that. I do need to verify it's doing the right thing, but I do that with my own code too, and I'm not always right either.

    As a research tool it's great at taking a basic, dumb description and pointing me to the right things to look for, especially for topics with a lot of technical terms and obscure areas.

    And yes they can occasionally make mistakes or invent things but if you ask properly and verify what you're told then it's pretty reliable, far more so than a lot of humans I know.

  • Why would I rebut that? I'm simply arguing that they don't need to be 'intelligent' to accurately determine the colour of the sky and that if you expect an intelligence to know the colour of the sky without ever seeing it then you're being absurd.

    The comment I responded to was written in a way that bears no relation to reality, and I addressed that.

    Again, as I said in other comments: you're arguing that an LLM is not Will Smith in I, Robot and/or Scarlett Johansson playing the role of a USB stick, but that's not what anyone sane is suggesting.

    A fork isn't great for eating soup, and neither is a knife, but that doesn't mean they're not incredibly useful eating utensils.

    Try thinking of an LLM as a type of NLP (natural language processing) tool which allows computers to use normal human text as input to perform a range of tasks. It's hugely useful and unlocks a vast amount of potential, but it's not going to slap anyone for joking about its wife.
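The "text in, task out" framing above can be sketched as code. This is a toy keyword matcher standing in for what an LLM does far more flexibly - the function name and the task schema are invented for illustration, not a real LLM API.

```python
# Toy sketch: free-form human text in, structured task out.
# A real LLM handles far messier phrasing; these keyword rules
# just illustrate the shape of the idea.

def parse_request(text: str) -> dict:
    lowered = text.lower()
    task = {"action": "unknown", "target": None}
    if "summarise" in lowered or "summarize" in lowered:
        task["action"] = "summarise"
    elif "translate" in lowered:
        task["action"] = "translate"
    for word in ("report", "email", "document"):
        if word in lowered:
            task["target"] = word
    return task

print(parse_request("Could you summarise this report for me?"))
# → {'action': 'summarise', 'target': 'report'}
```

The point is that the downstream program only ever sees the structured dict; the messy human phrasing is absorbed by the language layer.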

  • But this is not true: everyone has been asking for better internet speeds and natural language computing. 5G was needed because everyone is online all the time. Yeah, people aren't hyped about it because it's boring, but if we didn't have 5G and the archaic infrastructure didn't scale with demand, you can bet people would be yelling about it - when a train goes by there are a hundred people using mobile internet; likely none of them care about 5G, but they love being able to work, chat and browse the internet on their journey.

    AI is absurdly beneficial to people already, and these are incredibly early days. Again, people aren't going to be especially hyped by most of its uses - in fact they won't notice most of them - but it'll help fix a lot of things that really annoy or negatively affect them.

    As someone who has spent a lot of time learning about and designing GUIs, I can tell you that designing a system to give all the different user sets and types the controls they need is super complex - and as someone who actually programs them, I can assure you that implementing whatever system gets designed is even more painfully difficult. Now imagine not having to do that: imagine I can make a tool and the user just has to say 'import this old file in an obscure format then do these obscure but relatively simple things...'. This is huge from a development point of view and even huger from a user point of view.

    Ever have a family member ask you for the tenth time how to find their emails? Or hand you a device you've never seen before and say 'can you change the font size' and you have to go through menus and Google how to do it? Soon it'll be fairly standard to just tell things what you want and for them to actually understand.

    This is just one small benefit that LLMs and natural language computing bring, I could list other benefits for days
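The "just tell the device what you want" idea above can be sketched as a tiny intent handler. The keyword matching stands in for what an LLM would do far more robustly; the settings dict and function names are illustrative, not any real framework.

```python
# Toy sketch of natural-language control replacing menu-diving.
# A real system would hand the utterance to an LLM; here a few
# keyword checks stand in for that step.

SETTINGS = {"font_size": 12}

def handle(command: str) -> str:
    text = command.lower()
    if "font" in text and any(w in text for w in ("bigger", "larger", "increase")):
        SETTINGS["font_size"] += 2
        return f"font size set to {SETTINGS['font_size']}"
    return "sorry, I didn't understand that"

print(handle("can you make the font bigger?"))
# → font size set to 14
```

The win is exactly the one described above: the developer no longer has to anticipate every menu path, and the user no longer has to find it.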

  • People do that too; actually we do it a lot more than we realise. Studies of memory, for example, have shown that we create details we expect to be there to fill in blanks, and that we convince ourselves we remember them even when presented with evidence that refutes it.

    A lot of the newer implementations use more complex methods of fact verification; it's not easy to explain, but essentially it comes down to the weight you give different layers. GPT-5 is already training and likely to be out around October, but even before that we're seeing pipelines using LLMs to code task-based processes - an LLM is bad at chess, but it could easily install Stockfish in a VM and beat you every time.
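The "LLM delegates to Stockfish" pattern is just tool use, and the skeleton is simple. Everything below is a stand-in: `query_llm` mocks a real model call and `query_engine` mocks a sandboxed chess engine - neither is a real API.

```python
# Toy sketch of an LLM tool-use pipeline: the model is weak at chess,
# so the orchestrator routes chess questions to a dedicated engine.
# query_llm and query_engine are hypothetical stand-ins, not real APIs.

def query_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; here it only decides which tool to use.
    if "chess" in prompt.lower():
        return "TOOL:chess_engine"
    return "ANSWER:I'll answer this directly."

def query_engine(position: str) -> str:
    # Stand-in for a real engine such as Stockfish running in a sandbox.
    return f"best move for '{position}': e2e4"

def answer(prompt: str, position: str = "") -> str:
    decision = query_llm(prompt)
    if decision.startswith("TOOL:chess_engine"):
        return query_engine(position)
    return decision.removeprefix("ANSWER:")

print(answer("What's the best chess move here?", "startpos"))
# → best move for 'startpos': e2e4
```

The model never has to be good at chess; it only has to recognise that the question *is* chess and hand it off.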

  • That's only true on a very basic level. I understand that Turing's maths is complex and unintuitive, even more so than calculus, but it's a well-established fact that relatively simple mathematical operations can interact to produce emergent properties with far more complexity than initially expected.

    The same way the giraffe gets its spots is the same way all the hardware of our brain is built: a strand of code is converted into physical structures that interact and result in more complex behaviours. The actual reality is just math, and that math is almost entirely probability when you get down to it. We're all just next-word-guessing machines.

    We don't guess words like a Markov chain; instead we use a rather complex token system in our brain which then gets converted to words. LLMs do this too - that's how they can learn about a subject in one language and then explain it in another.

    Calling an LLM predictive text is a fundamental misunderstanding. It's somewhat true on a technical level, but only once you understand that predicting the next word can be a hugely complex operation - and that it's the fundamental math behind all human thought as well.

    Plus they're not really just predicting one word ahead anymore; they do structured generation, much like image generators do: first get the higher-level principles to a valid state, then propagate down into structure and form before making word and grammar choices. You can manually change values in the different layers and watch the output change; exploring the latent space like this makes it clear that it's not simply guessing the next word, but guessing the next word that best fits a required structure to express a desired point. I don't know how other people come up with sentences, but that feels a lot like what I do.
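For contrast, here is the thing an LLM is often wrongly reduced to: a bigram Markov chain. It predicts the next word purely from the previous word's observed frequencies, with no notion of meaning, tokens, or structure - a minimal sketch on a made-up corpus.

```python
from collections import defaultdict
import random

# Minimal bigram Markov chain: the next word depends only on the
# previous word's observed successors in the training text.

def train(corpus: str) -> dict:
    table = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def next_word(table: dict, word: str, seed: int = 0) -> str:
    random.seed(seed)
    return random.choice(table.get(word, ["<end>"]))

table = train("the sky is blue the sky is vast")
print(next_word(table, "sky"))
# → is  (the only word ever observed after "sky" in this corpus)
```

A model like this has no cross-lingual token layer, no structure above the word pair, and no way to plan a sentence - which is exactly the gap between "predictive text" and what the comment above describes.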

  • Ha ha yeah humans sure are great at not being convinced by the opinions of other people, that's why religion and politics are so simple and society is so sane and reasonable.

    Helen Keller would believe you if you told her it's purple.

    If humans didn't have eyes they wouldn't know the colour of the sky; if you give an AI a colour video feed of the outdoors, it'll be able to tell you exactly what colour the sky is, using a whole range of very accurate metrics.

  • I use LLMs to create things no human has likely ever said, and they're great at it. For example:

    'while juggling chainsaws atop a unicycle made of marshmallows, I pondered the existential implications of the colour blue on a pineapples dream of becoming a unicorn'

    When I ask it to do the same using neologisms the output is even better. One of the words was 'exquimodal', which I then asked it to invent an etymology for - it came up with one that combined 'excuistus' and 'modial' to define it as something beyond traditional measures, which fits perfectly into the sentence it created.

    You can't ask a parrot to invent words with meaning and use them in context; that's a step beyond repetition. Of course it's not full dynamic self-aware reasoning, but it's certainly not just being a parrot.

  • But also the number of people who seem to think we need a magic soul to perform useful work is way, way too high.

    The main problem is that idiots seem to have watched one too many movies about robots with souls and gotten confused between real life and fantasy - especially shitty journalists way out of their depth.

    This big gotcha - 'they don't live up to the hype' - is 100% people who heard 'AI' and thought of bad Will Smith movies. LLMs absolutely live up to the actual sensible things people hoped for, and have exceeded those expectations. They're also incredibly good at a huge range of very useful tasks that have traditionally been considered to require intelligence. But they're not magically able to do everything - of course they're not; that's not how anyone actually involved said they would work or expected them to work.

  • People love to say things like this, but it's kinda ridiculous; pretty much every new tech is hugely successful. Those battery advances that no one really believes in? You've probably got one of them in your hand right now. You're probably physically closer to someone using ChatGPT than you are to someone reading a book - if not, you almost certainly met more people today who have used GPT more recently than they've read from a book. VR adoption continues to grow, automation solutions are getting installed all over the place at a rapid rate, electric cars are gaining market share, whole countries are using desalination for their water supply, and everyone who's said anything about OSIRIS-REx has been excited about the move towards space-based industry.

    The bulk of the population is loving the endless tech upgrades and eager for more, yeah not everything is good and most people are adult enough to realise that.

    (No I did not read the article, someone said it was shit and I don't doubt them)