Posts: 0 · Comments: 304 · Joined: 2 yr. ago

  • But why couldn't an AI do the same?

    Why are you assuming it can never get good enough to correctly figure out the intent and find the best possible response it is capable of?

    Sure, it's not there today, but this doesn't seem like some insurmountable challenge.

  • I also just noticed in the article:

    TikTok urged its users to protest the bill, sending a notification that said, "Congress is planning a total ban of TikTok... Let Congress know what TikTok means to you and tell them to vote NO."

    Also from a BBC article about the same thing:

    Earlier, users of the app had received a notification urging them to act to "stop a TikTok shutdown."

    So they were literally sending out misleading notifications (because a forced sale is not a total ban), and then the users wrote to Congress based on that...

    The probability that they will sell seems really high to me, as the same thing almost happened back in 2020.

  • Yes, there are. In addition to the thumbs up/down buttons that most people don't use, you can also score responses based on metrics like "did the person try to rephrase the same question again?" (an indication of a bad response), all derived from data gathered during actual use (which ChatGPT does use for training). One such heuristic is sketched below.
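
    A minimal sketch of one such heuristic in Python: treat an assistant reply as likely bad if the user's next message looks like a rephrase of their previous one. The lexical similarity measure, threshold, and ±1 scoring are illustrative assumptions, not how ChatGPT actually weighs this signal.

    ```python
    # Score assistant replies by whether the user immediately rephrased their question.
    from difflib import SequenceMatcher

    def looks_like_rephrase(prev_msg: str, next_msg: str, threshold: float = 0.6) -> bool:
        """Rough lexical similarity between two consecutive user messages."""
        return SequenceMatcher(None, prev_msg.lower(), next_msg.lower()).ratio() >= threshold

    def implicit_scores(conversation: list[dict]) -> list[tuple[str, int]]:
        """Assign -1 to assistant replies followed by a user rephrase, +1 otherwise."""
        scores = []
        for i, turn in enumerate(conversation):
            if turn["role"] != "assistant":
                continue
            prev_user = conversation[i - 1]["content"] if i >= 1 else ""
            next_user = conversation[i + 1]["content"] if i + 1 < len(conversation) else None
            if next_user is not None and looks_like_rephrase(prev_user, next_user):
                scores.append((turn["content"], -1))  # user asked again -> likely bad reply
            else:
                scores.append((turn["content"], +1))
        return scores

    if __name__ == "__main__":
        convo = [
            {"role": "user", "content": "How do I rotate a PDF page with Python?"},
            {"role": "assistant", "content": "You could print it and scan it rotated."},
            {"role": "user", "content": "How can I rotate a PDF page using Python?"},
            {"role": "assistant", "content": "Use pypdf: page.rotate(90), then write the file back."},
        ]
        print(implicit_scores(convo))
    ```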

  • Human experts often say things like "customers say X, they probably mean they want Y and Z" purely based on their experience of dealing with people in some field for a long time.

    That is something that can be learned. Follow-up questions can be asked to clarify (or even to raise doubts: "are you sure you don't mean Y instead?"), and so on. Not that complicated.

    (Could be why OpenAI chooses to degrade the experience so much when you disable chat history and training in ChatGPT 😀)

    Today's LLMs have other quirks, like how adding certain words can help even if they barely change the meaning, but that's not magic either.

  • It's not dead, and it's not going anywhere as long as LLMs exist.

    Prompt engineering is about expressing your intent in a way that leads an LLM to the desired result (which right now sometimes requires weird phrases, etc.).

    It will go away as soon as LLMs get good at inferring intent. It might not be a single model and it may require some extra steps, but there is nothing uniquely "human" about writing prompts.

    Future systems could, for example, start asking clarifying questions more often and then use the answers as input to the next stage that tweaks the prompt.
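
    A rough sketch of what that flow could look like, assuming the OpenAI Python SDK; the model name, the prompts, and the `input()` stand-in for a second chat turn are illustrative assumptions rather than anyone's actual pipeline.

    ```python
    # Two-stage flow: first ask one clarifying question, then answer with the clarified intent.
    from openai import OpenAI

    client = OpenAI()          # reads OPENAI_API_KEY from the environment
    MODEL = "gpt-4o-mini"      # assumed model name; any chat model would do

    def ask(messages: list[dict]) -> str:
        resp = client.chat.completions.create(model=MODEL, messages=messages)
        return resp.choices[0].message.content

    def answer_with_clarification(user_request: str) -> str:
        # Stage 1: have the model surface the single most useful clarifying question.
        question = ask([
            {"role": "system", "content": "Ask exactly one short question that would best clarify the user's intent."},
            {"role": "user", "content": user_request},
        ])
        clarification = input(f"{question}\n> ")  # in a real system this would be another chat turn

        # Stage 2: answer the original request, enriched with the clarified intent.
        return ask([
            {"role": "system", "content": "Answer the request, taking the clarification into account."},
            {"role": "user", "content": f"Request: {user_request}\nClarification: {clarification}"},
        ])

    if __name__ == "__main__":
        print(answer_with_clarification("Write a short bio for my website."))
    ```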

  • "Getting to a place" being a barrier may be a bit of a stretch (unless it's like really far and interferes with your work, etc.), but actually deciding to do therapy, what kind, finding a good therapist, and setting up the first appointment - that can be quite a massive barrier.

  • You don't need a facebook account, a meta account was available as an alternative. That's great right? Much better!!!

    Actually yes. The problem with needing a Facebook account was that it was part of an unrelated service (social network, messenger, etc.) that you couldn't separate. Meta accounts are separate accounts for VR only, much like the previous Oculus accounts.

  • For these kinds of generic questions, ChatGPT is great at giving you the common fluff you'd find in a random "10 ways to improve your career" YouTube video.

    Which may still be useful advice, but you can probably already guess what it's going to say before hitting enter.

  • As far as I know, that is mainly used when a better, bigger model generates training data for a more efficient smaller model, to bring it a bit closer to the big model's level (roughly the teacher-to-student setup sketched below).

    Were there any cases of an already state-of-the-art model using this method to improve itself?
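
    For reference, this is roughly what the teacher-to-student setup looks like, as a minimal PyTorch sketch; the toy model sizes, temperature, and random stand-in data are placeholders, not any specific lab's recipe.

    ```python
    # Distillation sketch: a smaller "student" is trained to match a bigger "teacher"'s soft targets.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    teacher = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 10))  # bigger model (assumed already trained)
    student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))    # smaller, more efficient model
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    temperature = 2.0

    for step in range(100):
        x = torch.randn(64, 16)                 # stand-in for real training inputs
        with torch.no_grad():
            teacher_logits = teacher(x)         # teacher supplies the targets ("training data")
        student_logits = student(x)
        # Standard distillation loss: KL divergence between temperature-softened distributions.
        loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    ```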

  • I can kind of see his point, but the things he is suggesting instead (biology, chemistry, finance) don't make sense for several reasons.

    Besides the obvious "why couldn't AI just replace those people too?" (even if it takes an extra few years), there is also the question of how many people can actually develop deep enough expertise to make meaningful contributions there - if we're talking about a massive increase in the number of people going into those fields.

  • I would rather have better E2EE

    and

    I want my chats to be available on all devices even if I drop my phone into a volcano

    are kinda conflicting goals. If the chats are easily available on a new device without you manually syncing the key, that means the key exists somewhere in the cloud outside of your control, which is the opposite of good E2EE.

    You can still achieve both goals, but it would involve exporting the key, storing it somewhere safe, and then importing it on the new device from wherever you stored it.
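
    A minimal sketch of that export/import flow in Python, using the `cryptography` package: the chat key is wrapped with a key derived from a passphrase, so the backup blob can sit in untrusted storage and only the passphrase holder can unwrap it on a new device. The scrypt parameters and blob layout are illustrative assumptions.

    ```python
    # Wrap an E2EE chat key under a passphrase for backup, then unwrap it on a new device.
    import base64
    import os

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

    def _wrapper_from_passphrase(passphrase: str, salt: bytes) -> Fernet:
        kdf = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1)  # illustrative parameters
        derived = kdf.derive(passphrase.encode())
        return Fernet(base64.urlsafe_b64encode(derived))

    def export_chat_key(chat_key: bytes, passphrase: str) -> bytes:
        """Produce a backup blob: salt + chat key encrypted under the passphrase."""
        salt = os.urandom(16)
        wrapped = _wrapper_from_passphrase(passphrase, salt).encrypt(chat_key)
        return salt + wrapped

    def import_chat_key(blob: bytes, passphrase: str) -> bytes:
        """Recover the chat key on a new device from the blob and the passphrase."""
        salt, wrapped = blob[:16], blob[16:]
        return _wrapper_from_passphrase(passphrase, salt).decrypt(wrapped)

    if __name__ == "__main__":
        original = os.urandom(32)                      # the device's E2EE chat key
        blob = export_chat_key(original, "correct horse battery staple")
        assert import_chat_key(blob, "correct horse battery staple") == original
    ```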