Adding a feature because ChatGPT incorrectly thinks it exists
I think, therefore I add features.
Normally people use ChatGPT to vibe code; this is the first instance I'm aware of where ChatGPT uses people to vibe code!
ChatGPT is just enforcing the 4th law of robotics.
I've actually had this happen more than once because of a coworker who uses Copilot: it would help him with code but use functions that don't exist. He has since finally stopped trusting Copilot code without more testing.
Worse, they typically do exist, just in training data made up of others' code. Sometimes if you put the function names into GitHub search, you can find them.
In Soviet Russia, ChatGPT uses you for vibe coding!
Fascinating! Because this notation is already used by another tool (and possibly more), it might not be as silly as it sounds. From the headline it sounds like some really weird API was added to something.
Happy to see this sort of optimism in the wake of AI causing people's programs to get bad reviews because the AI thinks they can do things they can't.
Thanks for the share. Maybe I'm looking too far into it, or just in one of those moods, but this really is oddly inspirational to me.
It's not really used by "a tool"; it's a customary plain-text notation for guitar. People use it to learn and teach how an existing song is played; it's not intended to fully notate music.
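For example, a snippet of ASCII tab for a short riff (made up here purely to illustrate the format) looks like this, with one line per guitar string and numbers marking fret positions, read left to right:

  e|--0--2--3--2--0-----|
  B|-----------------3--|
  G|--------------------|
  D|--------------------|
  A|--------------------|
  E|--------------------|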
It is inspiring to me.
This is how ChatGPT makes a feature request.
Better to flood them with interested users than to ask for a thumbs-up on a ticket.
ChatGPT is just fancy autocomplete, so it probably got the notation from somewhere else; it's not really capable of inventing new things on its own (unless it hallucinates). It would be interesting to ask it where it saw that notation, given that you didn't support it before. In a way, you could say it's a standard form of notation, just from a different service.
You know it's not strictly autocompleting sentences that previously existed, right? It's using words that it anticipates should follow others. I've had it suggest code libraries that don't exist, and you'll hear about people going to the library to ask for books that were never written but are attributed to real authors, because the titles sound like something those authors would write.
Tab music notation is super common, and although it wasn't supported by this particular service before, you can see how it might be the sort of request people make, so ChatGPT combined the two.
IIRC Wikipedia supports it for tab notation.
Personally I much prefer LilyPond. I wonder if this tool supports LilyPond. Would love to have a workflow to scan sheet music and get LilyPond out the other end.
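For anyone unfamiliar, LilyPond is a text-based music engraving format; a minimal source file (a made-up four-note example, just to show the syntax) looks something like this:

  \version "2.24.0"
  {
    c'4 d'4 e'4 f'4
  }

Running it through the lilypond command produces engraved sheet music as a PDF.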
My feelings on this are conflicted. I’m happy to add a tool that helps people. But I feel like our hand was forced in a weird way.
Oh really? You can't tell us you're not happy about the free marketing/traffic...
Traffic with completely misinformed expectations is generally undesirable. There have been reports of places getting lots of negative reviews because of this.
TBH, this is barely any different from marketing promising that a product will have a feature the development team only finds out about later, purely by accident, when upper management asks about it.
It's worse. This is as if the restaurant across the street, a company completely unaffiliated with me and not even in my industry, were running a lunch promotion that advertises something for my business that I did not approve and do not sell.
If a human being did this, it would be so grossly, obviously illegal it wouldn't even have to go to court; it's blatant fraud. Lying on this level is so beyond the pale that I'd call it uncivilized behavior, the kind that far predates modern copyright and property laws and has been frowned upon in almost every society since the dawn of civilization.
As a side note, that is exactly how most (all?) food delivery apps operate, presumably with no AI involved: don't want to list your restaurant on an app? No problem, someone unaffiliated will create a listing in your name. You'll still get customers complaining about high delivery prices, incorrect menu items, and poor quality, while the app collects commissions and refuses to enforce any kind of control, basically pressuring restaurants to pay for a listing on the app just to have any say in how their name is used.
ChatGPT didn't "think" anything. It generated instructions telling users to do things incorrectly based on the human-generated content in its training data, which it didn't understand because it doesn't understand anything.
Something like this should be a warning label on AI.