I asked 6 questions on StackOverflow. 3 in 2010 and 3 in 2011.
For context: I gave 183 answers.
I agree that most questions have already been asked.
Moreover, most questions on StackOverflow can be answered with some contextual knowledge, a reading of the official docs or references, or by trying things out. I've not felt the need to ask anything.
I've never seen a need for localization beyond domain terminology, and I think it would be hugely detrimental.
Implementing it would add significant, unnecessary complexity; that effort is better spent elsewhere. And for programmers it would be confusing: think of code snippets, mixed-language content, and the need for reserved-word expansion or exclusive parsing scopes, which would be even more complex and confusing.
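To make the parsing-scope point concrete, here's a minimal sketch in Python (entirely hypothetical: the KEYWORD_MAP and naive_translate helper are made up for illustration) of what a naive keyword-localization pass would look like. A token-blind substitution mangles any content that happens to contain a localized keyword, which is exactly why you'd need exclusive parsing scopes:

    # Hypothetical German-to-English keyword map for a localized language variant.
    KEYWORD_MAP = {
        "wenn": "if",
        "sonst": "else",
        "drucke": "print",
    }

    def naive_translate(source: str) -> str:
        """Replace localized keywords with English ones, token-blind."""
        for localized, english in KEYWORD_MAP.items():
            source = source.replace(localized, english)
        return source

    localized_snippet = 'wenn x > 0:\n    drucke("wenn es regnet, bleiben wir drinnen")\n'

    print(naive_translate(localized_snippet))
    # Output:
    #   if x > 0:
    #       print("if es regnet, bleiben wir drinnen")
    # The "wenn" inside the string literal was rewritten too, corrupting the content.

A real implementation would have to tokenize strings, comments, and identifiers separately before touching any keywords, and shared snippets would still read differently for every locale.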
At work, we recently talked about AI. One use case mentioned (by an AI consulting firm, not by us, and not actually suggested for us) was meeting summaries and extracting TODOs from them.
My stance is that AI can be useful for topic summaries, so you can see what was being talked about. But I would never trust it to extract all the significant points, TODOs, or agreements. You still need humans for that, with explicit agreement on and confirmation of the list during or after the meeting.
It can also help transcribe meetings, and it could even translate them; those things can be useful. But summarization should never be treated as factual extraction of the significant points, especially in a business context or anywhere else you actually need to be able to trust the information.
I wouldn't [fully] trust it with transforming facts either. It can work where you can spot inaccuracies (long text, lots of context), or where you don't care about them.
Natural language instructions to machine instructions? I'd certainly be careful with that, and want to both contextualize it and test-confirm that it works well enough for the use case and context.
At least that's a testament to neutrality - in a shitty way.