Posts 4 · Comments 287 · Joined 2 yr. ago

  • We do not have a rigorous model of the brain, yet we have designed LLMs. Experts with decades in ML recognize that there is no intelligence happening here, because, yes, we don't understand intelligence, certainly not well enough to build one.

    If we want to go by definitions, here is Merriam-Webster:

    > (1) : the ability to learn or understand or to deal with new or trying situations : reason
    >
    > also : the skilled use of reason
    >
    > (2) : the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)

    The context window is the closest thing we have to retaining and applying old info to newer context; the rest is in the name: Generative Pre-Trained language models. Their output is produced by a statistical model finding similar text, which is why some ML researchers have dubbed them "stochastic parrots", a name I find more fitting. There's also no doubt about their potential (and already practiced) utility, but they're a long shot from being considered a person by law.

  • I don't want to spam this link, but seriously, watch this 3blue1brown video on how text transformers work. You're right on that last part, but it's a far cry from an intelligence; just a very intelligent use of statistical methods (toy sketch at the end of this comment). But it's precisely for that reason that it can be "convinced": the parameters restraining its output have to be weighed into the model, so it's just a statistic that will fail.

    I'm not intending to downplay the significance of GPTs, but we need to temper the hype around them before we can discuss where AI goes next and what it can mean for people. And certainly long before we use it for any secure services, because we've already seen what can happen.
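
    On the "very intelligent use of statistical methods" point, here is a toy, hedged sketch of a single self-attention head in plain numpy. The shapes and weights are made up, and it leaves out everything a real transformer also has (multiple heads, MLPs, positional encodings), but it shows the core operation really is a handful of matrix multiplies plus a softmax.

    ```python
    # Toy single-head self-attention in numpy (illustrative only; all
    # dimensions and weights here are random placeholders, not a real model).
    import numpy as np

    rng = np.random.default_rng(0)
    seq_len, d_model, d_head = 4, 8, 8

    X = rng.normal(size=(seq_len, d_model))   # token embeddings
    W_q = rng.normal(size=(d_model, d_head))  # learned projection matrices
    W_k = rng.normal(size=(d_model, d_head))
    W_v = rng.normal(size=(d_model, d_head))

    Q, K, V = X @ W_q, X @ W_k, X @ W_v       # three matrix multiplies
    scores = Q @ K.T / np.sqrt(d_head)        # token-to-token similarity
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    out = weights @ V                         # weighted mix of value vectors
    print(out.shape)                          # (4, 8)
    ```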

  • Building my own training set is something I would certainly want to do eventually. I've been messing with Mistral Instruct through GPT4All, and it's genuinely impressive how quickly my 2060 can hallucinate relatively accurate information, but the limitations are also evident. E.g., if I tell it I do not want to use AWS or another cloud hosting service, it will just return a list of suggested services that leaves out AWS. Most certainly a limit of its training data, but still impressive (rough sketch of the setup at the end of this comment).

    Anyone suggesting we use LLMs to manage people or resources is better off flipping a coin on every decision; more than likely, the companies insistent on it will go belly up soon enough.
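
    For reference, a minimal sketch of the local setup I mean, using the gpt4all Python bindings; the model filename is just an example, swap in whichever Mistral Instruct GGUF you have downloaded.

    ```python
    # Minimal local-LLM sketch with the gpt4all Python bindings.
    # The model filename is a placeholder; use whatever Mistral Instruct
    # build you have (GPT4All will download it if it isn't present).
    from gpt4all import GPT4All

    model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")

    with model.chat_session():
        reply = model.generate(
            "Suggest ways to host a small web app. "
            "Do not suggest AWS or any other cloud hosting service.",
            max_tokens=256,
        )
        print(reply)
    ```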

  • The fallout of image generation will be even more incredible, imo. Even if models do become more capable, training on post-'21 data will become increasingly polluted and difficult to distinguish as models improve their output, which inevitably leads to model collapse. At least until we have a standardized way of flagging generated images as opposed to real ones, but I don't really like that future.

    Just on a tangent, OpenAI claiming video models will help "AGI" understand the world around it is laughable to me. 3blue1brown released a very informative video on how text transformers work, and in principle all "AI" is at the moment is very clever statistics and a lot of matrix multiplication. How our minds process and retain information is far more complicated; we don't fully understand ourselves yet, and we are a grand leap away from ever emulating a true mind.

    All that to say: I can't wait for people to realize, "oh hey, that is just Silicon Valley trying to replace talent in film production."

  • Hey, yes: add a person for each user you want to have access, and only have yourself as admin; that way they won't have direct access to entities.

    Now you can use either conditional dashboards or conditional cards to control which utilities are available to which users. For example, my dashboard's home screen shows general controls for everyone, but conditionally shows each user their own light controls on the same page. I then have an admin dashboard for myself with server controls, etc.

    For the garage, that was an issue I sought to avoid: a simple automation that closes the garage if it has been open for too long, plus a toggle for that function as well as garage notifications, seems to be plenty (rough sketch at the end of this comment).

    There was only one incident, where my jerry-rigged Shelly relay remote had the sensor switch miss, and the auto-close, thinking the garage was open, opened the garage. I added an error state so that it attempts to close again and notifies me if the garage is still open after the auto-close, which also covers the door being blocked. All of this because MyQ doesn't provide local or API access, lol.

    Edit: turns out I may be wrong about entity access, which is a bit of a shame; hopefully we can see that in a future update.
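
    If it helps, here is a rough, hedged sketch of the auto-close idea as an external Python script against Home Assistant's REST API (in my case it lives in a normal HA automation instead); the URL, token, and entity ID below are placeholders, not my actual setup.

    ```python
    # Rough sketch: close the garage if it has been open too long, then
    # re-check and flag an error state if it is still open afterwards.
    # All identifiers below are placeholders.
    import time
    import requests

    HA_URL = "http://homeassistant.local:8123"      # assumed HA address
    TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"          # placeholder token
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}
    GARAGE = "cover.garage_door"                    # hypothetical entity id
    MAX_OPEN_SECONDS = 15 * 60

    def garage_state() -> str:
        r = requests.get(f"{HA_URL}/api/states/{GARAGE}", headers=HEADERS, timeout=10)
        r.raise_for_status()
        return r.json()["state"]

    def close_garage() -> None:
        r = requests.post(
            f"{HA_URL}/api/services/cover/close_cover",
            headers=HEADERS,
            json={"entity_id": GARAGE},
            timeout=10,
        )
        r.raise_for_status()

    opened_at = None
    while True:
        if garage_state() == "open":
            opened_at = opened_at or time.monotonic()
            if time.monotonic() - opened_at > MAX_OPEN_SECONDS:
                close_garage()
                time.sleep(60)                      # give the door time to move
                if garage_state() == "open":
                    # error state: still open after auto-close (blocked or sensor miss)
                    print("auto-close failed, would send a notification here")
                opened_at = None
        else:
            opened_at = None
        time.sleep(30)
    ```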

  • If you think the useless appliances are bad, just take a look at more critical connected devices.

    I needed some PoE security cameras and found some Foscam ones on the cheap. Plugged them in, went to the IP, "install our app"... I was pleased to find they allowed a local account without needing an email, but then found that half of my network traffic consisted of requests to their "ivyIOT AI detection". I didn't measure what data was going through before sectioning them off behind a firewall zone.

    My fault for not having looked further into other brands; they were still a bargain and work without issue in my setup, but it's annoying.

  • I'll let you in on a secret... I actually think they're really neat, just not really useful for the applications where they're most prevalent (news sites, etc., where you likely want to be online anyway)... but that's got me thinking a wiki PWA that caches articles would be sweet... and that's all I can think of, lol.