Now that I think of it, this seems to be at the core of some issues both with training AI agents via reinforcement learning (if you choose the wrong metric, you get behavior that makes sense for the agent but isn't what you want) and with any kind of planned economy (you need targets for planning, but people game the targets, so again you don't get what you want).
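As a toy illustration (a made-up setup, not any real training run): suppose the designer wants an agent to reach a goal but rewards distance traveled as a proxy. A greedy optimizer happily paces in circles:

    # Hypothetical toy example: the designer wants the agent to reach the
    # goal, but rewards distance traveled as a proxy for progress.
    ACTIONS = ["move_toward_goal", "pace_in_circles"]

    def proxy_reward(action):
        # Distance covered per step; circling covers more ground here.
        return 2.0 if action == "pace_in_circles" else 1.0

    def true_value(action):
        # What the designer actually wanted: progress toward the goal.
        return 1.0 if action == "move_toward_goal" else 0.0

    # A greedy "agent" that just picks whatever maximizes the proxy.
    best = max(ACTIONS, key=proxy_reward)
    print("agent picks:", best)                 # pace_in_circles
    print("proxy reward:", proxy_reward(best))  # 2.0
    print("true value:", true_value(best))      # 0.0

The proxy is maximized, the actual objective isn't touched, and nothing about the optimizer is broken: the agent did exactly what the metric asked for.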
That's what's happening with Google's and Instagram's search algorithms. People figure out how to game them and start spamming. Then the search results deteriorate and the platform has to modify the algorithm.
Thanks, yes, I saw that one too, but I liked this version's emphasis on relationships. The shorter phrasing is easier to grasp, but it doesn't explain why this happens. E.g., you observe a relationship between test scores and a student's intelligence, so you start targeting the scores. But that creates an incentive to teach to the test, which breaks the relationship between scores and intelligence. Other people here gave great examples of relationships that can fail like this.
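Here's a quick Python sketch of that with made-up numbers: scores track ability while students study honestly, but once everyone optimizes the score directly (cramming test-taking tricks that add points unrelated to ability), the correlation collapses:

    import random

    def correlation(xs, ys):
        # Pearson correlation coefficient.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    random.seed(0)
    ability = [random.gauss(0, 1) for _ in range(1000)]

    # Before targeting: the score is just a noisy measurement of ability.
    honest_scores = [a + random.gauss(0, 0.5) for a in ability]

    # After targeting: everyone crams tricks that add points unrelated to
    # ability (the magnitude here is an assumption, chosen for effect).
    gamed_scores = [a + random.gauss(0, 0.5) + random.uniform(0, 10)
                    for a in ability]

    print("before targeting: %.2f" % correlation(ability, honest_scores))
    print("after targeting:  %.2f" % correlation(ability, gamed_scores))

With these numbers the correlation drops from roughly 0.9 to roughly 0.3: the measure still exists, it just no longer tells you what it used to.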
What do you guys say after it's over to release the tension? Asking for a friend