How feasible is privacy-respecting personalised search engine results?
The issues with Google's personalised search results are, imo:
- Not only is it not opt-in, but you can't even opt out of it. Personalised search results should be opt-in and disabled by default.
- The data kept on you is used to sell you ads
- The data kept on you will be handed over to state entities fairly easily
Given those three problems, how feasible would it be to self-host a search engine that personalises your results to show you things that are more relevant to you? Self-hosting avoids issues 1 & 2, since you're the one making the decisions around those two things. And issue 3 is improved: you can host it off-shore if you're concerned about your domestic state, and if you're legally compelled to hand over data, you can make the personal choice about whether to take the hit of refusing, rather than relying on a big company that will obviously comply immediately and not attempt to fight it even on legal grounds.
A basic use-case example: say you're a programmer and you look up `ruby`. You would want the first result to be the programming language's website rather than the Wikipedia page for the gemstone. You could just make the search query `ruby programming language` on any privacy-respecting search engine, but it's a bit of a QoL improvement not to have to think about the different ways an ambiguous query like that could be interpreted.
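As a sketch of how that QoL layer could work entirely on your own machine, before any query leaves it: a small, locally stored profile maps ambiguous terms to expanded queries. `PROFILE` and `rewrite_query` are made-up names for illustration, not part of any existing tool.

```python
# A minimal sketch of opt-in query disambiguation. PROFILE and
# rewrite_query are hypothetical names, not from any existing tool.
PROFILE = {
    "ruby": "ruby programming language",
    "go": "go programming language",
    "mercury": "mercury programming language",
}

def rewrite_query(query: str, profile: dict[str, str]) -> str:
    # Only expand bare, ambiguous queries; longer queries usually
    # carry enough disambiguating context already.
    return profile.get(query.strip().lower(), query)

print(rewrite_query("ruby", PROFILE))             # -> "ruby programming language"
print(rewrite_query("ruby gemstone price", PROFILE))  # unchanged
```

The profile never leaves your machine, which is exactly the opt-in, self-decided behaviour described in points 1 and 2 above.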
Self-hosting a search engine is very hard. The scraping, indexing, and storage requirements are immense. You could definitely self-host a front end (with your QoL improvements), but the back-end search engines (Bing/Google/etc.) will be able to track you all the same.
Is there even open-source indexing software available?
There's YaCy. I ran a node for a while, but it ended up filling my server's drive just indexing German Wikipedia, and the results were terrible.
And it's still not private because you have to broadcast the query across the network.
None that I'm aware of. There are web scrapers, and I guess you could just scrape pages, dump the results into a Postgres DB, and use that as your index. But I'm guessing you'll eventually want something more tuned/custom. And even if such software existed, there's the discovery problem: how do you find the sites to scrape in the first place? Bing and Google both let site operators submit URLs, but that isn't gonna scale for self-hosting.
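For what it's worth, here's a rough sketch of that scrape-and-dump approach, assuming Postgres 12+ (for generated columns) and its built-in full-text search. The connection string, table layout, and function names are made up for illustration.

```python
# Rough sketch: scrape pages into Postgres and search them with its
# built-in full-text search. Requires:
#   pip install requests beautifulsoup4 psycopg2-binary
import psycopg2
import requests
from bs4 import BeautifulSoup

conn = psycopg2.connect("dbname=search")  # placeholder connection string
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS pages (
        url  text PRIMARY KEY,
        body text,
        tsv  tsvector GENERATED ALWAYS AS (to_tsvector('english', body)) STORED
    )
""")
cur.execute("CREATE INDEX IF NOT EXISTS pages_tsv_idx ON pages USING gin(tsv)")
conn.commit()

def scrape(url: str) -> None:
    """Fetch a page and store its visible text; the tsv column updates itself."""
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    cur.execute(
        "INSERT INTO pages (url, body) VALUES (%s, %s) "
        "ON CONFLICT (url) DO UPDATE SET body = EXCLUDED.body",
        (url, text),
    )
    conn.commit()

def search(query: str, limit: int = 10):
    """Rank stored pages against the query with Postgres full-text search."""
    cur.execute(
        "SELECT url, ts_rank(tsv, plainto_tsquery('english', %s)) AS rank "
        "FROM pages WHERE tsv @@ plainto_tsquery('english', %s) "
        "ORDER BY rank DESC LIMIT %s",
        (query, query, limit),
    )
    return cur.fetchall()
```

It works, but it only makes the storage problem concrete: every page you want findable has to be fetched and stored locally, and the discovery problem above is still unsolved.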
Stract, Marginalia, Wiby, Mwmbl, etc.
The first two are NLnet-funded, and the second one (Marginalia) is among the best developed, despite using Java rather than Rust. The developer seems to take the project very seriously.
That's a good point. I forgot that stuff like SearXNG is only a frontend, so to add personalisation to it you'd have to modify your queries to Bing/Google/etc., I assume, rather than do what Google does with whatever algorithm it uses internally to rank results.
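For completeness, a sketch of what that personalisation layer might look like in front of a self-hosted SearXNG instance, assuming its JSON output format is enabled in settings.yml (so `/search?format=json` works); `SEARX_URL` and `PREFERRED_DOMAINS` are placeholders, and the exact response shape may vary between versions. Note this only fixes ranking locally: the upstream engines still see your queries, as pointed out above.

```python
# Sketch: rewrite the query locally, then re-rank SearXNG results
# against a personal list of preferred domains. SEARX_URL and
# PREFERRED_DOMAINS are placeholders; assumes SearXNG's JSON output
# format is enabled in its settings.
import requests

SEARX_URL = "http://localhost:8888/search"
PREFERRED_DOMAINS = ["ruby-lang.org", "docs.python.org", "github.com"]

def personalised_search(query: str) -> list[dict]:
    resp = requests.get(
        SEARX_URL, params={"q": query, "format": "json"}, timeout=10
    )
    results = resp.json().get("results", [])

    # Boost results whose URL matches a preferred domain. sorted() is
    # stable, so the original order is kept within each group.
    def preferred(result: dict) -> bool:
        return any(d in result.get("url", "") for d in PREFERRED_DOMAINS)

    return sorted(results, key=preferred, reverse=True)
```

So the personalisation happens after the fact, by reordering whatever the back ends return, rather than by changing how they rank things the way Google's own algorithm does.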