Google's AI now listens to your English language phone conversations
StarDreamer @ stardreamer @lemmy.blahaj.zone Posts 0Comments 167Joined 2 yr. ago
I may be biased (PhD student here), but I don't fault them for being that way. Ethics is something that 1) requires formal training, 2) requires oversight, and 3) differs from person to person. Quite frankly, it's not part of their training, it's never been emphasized as part of their training, and it's subjective, shaped by cultural experience.
What counts as an unreasonable risk of harm is going to be different for everybody. To me, if the entire design runs locally and does not collect data for Google's use, then it's perfectly ethical. That being said, this does not prevent someone else from adding data collection features later. I think the original design of such a system should put a reasonable amount of effort into preventing that. But if that's done, there's nothing else to blame the designers for. The moral responsibility lies with the one who pulled the trigger.
Should the original designer have anticipated this issue and thus never taken the first step? Maybe. But that depends on a lot of circumstances we don't know, so it's hard to say anything meaningful.
As for the "more harm than good" analysis, I absolutely detest that sort of reasoning, since it attempts to quantify social utility in a purely mathematical sense. If this reasoning holds, an extreme example would be justifying harm to any minority group as long as it maximizes benefit for society as a whole. Basically Omelas. I believe a better criterion would be checking whether harm is introduced to ANY group of people; as long as that's the case, the whole thing should be considered unethical.