Do Users Write More Insecure Code with AI Assistants?
cross-posted from: https://programming.dev/post/8121843
~n (@nblr@chaos.social) writes:
This is fine...
"We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group."
[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)
This is just an extension of the larger issue of people not understanding how AI works, and trusting it too much.
AI is, and has always been, about trading accuracy for speed. It excels in cases where slow, methodical work isn't given sufficient time anyway, so accuracy is already low(er) as a result (e.g. overworked doctors examining CT scans).
But it should never be treated as the final word on something; it's the first ~70%.
I feel like I've been screaming this for so long, and you're someone who gets it. AI stuff right now is pretty neat. I'll use it to get jumping-off points and new ideas on how to build something.
I would never ever push something written by it to production without scrutinizing the hell out of it.
Didn’t it turn out that the CT scan analysis thing was just the model figuring out the rough age of the machine, because older machines tend to be in poorer places with more cancer and are more likely to only be used on serious illnesses?
If taking into account the older machines results in better healthcare, that seems like a great thing to be discovered as a result of the use of machine learning.
Your summary sounds like it may be inaccurate, but it's interesting enough for me to want to know more.
It's a decent first screen for pattern recognition, for sure, but it is fast, which is where I see most of its value. It can process information that people would never get to.