How did websites like TinEye recognize cropped photos of the same image (and other similar pictures) without the low barrier to entry of today's LLM/AI models?
https://vincmazet.github.io/bip/filtering/fourier.html
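One property that link covers: the magnitude of an image's Fourier transform is unchanged by (circular) translation, since a shift only alters the phase. A minimal NumPy sketch of that idea (the 32x32 random "image" here is just a stand-in, not anything TinEye-specific):

```python
import numpy as np

# Stand-in for a grayscale image.
rng = np.random.default_rng(0)
img = rng.random((32, 32))

# The same image, circularly shifted (a crude stand-in for translation).
shifted = np.roll(img, shift=(5, 7), axis=(0, 1))

# A shift multiplies the DFT by a phase factor, so the magnitude is identical.
mag = np.abs(np.fft.fft2(img))
mag_shifted = np.abs(np.fft.fft2(shifted))

print(np.allclose(mag, mag_shifted))  # True
```

So comparing magnitude spectra instead of raw pixels already buys you translation invariance for free; related tricks (log-polar resampling of the spectrum) extend this toward rotation and scale.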
There are ways to encode images that make them much less sensitive to cropping, resolution, and rotation. Like how if you wanted to search for color-filtered images, you could just throw out the color and compare them in black and white.
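A classic example of this kind of encoding is a perceptual hash such as dHash (to be clear, I'm not claiming TinEye uses exactly this): shrink the image to a tiny grayscale thumbnail, then record only whether each pixel is brighter than its left neighbor. Resolution is thrown away by the resize, exact brightness is thrown away by the comparison, and near-duplicates end up with nearly identical bit strings. A rough sketch, using crude nearest-neighbor sampling in place of a real resize:

```python
import numpy as np

def dhash(gray, hash_size=8):
    """Difference hash of a 2-D grayscale array.

    Shrinks to (hash_size, hash_size + 1), then emits one bit per
    adjacent-column comparison: True where brightness increases left-to-right.
    """
    h, w = hash_size, hash_size + 1
    # Crude nearest-neighbor downsample; a real implementation would use
    # a proper resize (PIL, OpenCV) with averaging.
    rows = np.arange(h) * gray.shape[0] // h
    cols = np.arange(w) * gray.shape[1] // w
    small = gray[np.ix_(rows, cols)]
    return (small[:, 1:] > small[:, :-1]).flatten()  # 64 bits for hash_size=8

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(a != b))

# A random "image" and a 2x nearest-neighbor upscale of it hash identically,
# even though the pixel arrays have different shapes.
rng = np.random.default_rng(1)
base = rng.random((40, 45))
big = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
print(hamming(dhash(base), dhash(big)))  # 0
```

To search a big collection you'd index the hashes and look up anything within a small Hamming distance, rather than comparing pixels at all.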