Automated CI/CD Data Snapshots
There are several proprietary options (many/most of which you cannot host). Looking for Amazon Wishlist alternatives should help in putting together a list of potential options. Some additional projects which are open source and selfhostable that you could also start with include:
Everything I mentioned works for LAN services as long as you have a domain name. You shouldn't even need to point the domain name to any IP addresses to get it working. As long as you use a domain registrar that respects your privacy appropriately, you should be able to set things up with a good amount of privacy.
Yes, you can do wildcard certificates through Let's Encrypt. If you use one of the reverse proxies I mentioned, the reverse proxy will create the wildcard certificates and maintain them for you. However, you will likely need to use a DNS challenge. Doing so isn't necessarily difficult. You will likely need to generate an API key or something similar at the domain registrar or DNS service you're using. The process will likely vary depending on what DNS service/company you are using.
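For illustration, if you were doing this manually rather than through a reverse proxy, a certbot invocation with a DNS plugin looks roughly like this (Cloudflare is shown only as a stand-in; the plugin name and credentials file depend on your DNS provider, and the paths here are placeholders):

```shell
# Sketch only: requires the certbot-dns-cloudflare plugin and an API
# token saved in the credentials file. Substitute your own provider's
# plugin (e.g. certbot-dns-route53, certbot-dns-digitalocean, etc.).
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/certbot/cloudflare.ini \
  -d "your.domain" \
  -d "*.your.domain"
```

Reverse proxies like Traefik or Nginx Proxy Manager do the equivalent of this for you once you give them the API credentials.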
Congrats on getting everything working - it looks great!
One piece of (unprovoked, potentially unwanted) advice is to set up SSL. I know you're running your services behind WireGuard, so there isn't too much of a security concern running your services over HTTP. However, as the number of your services or users (family, friends, etc.) increases, you're more likely to run into issues with services not running over HTTPS.
The creation and renewal of SSL certificates can be done for free (assuming you already have a domain name) and automatically with certain reverse proxy services like Nginx Proxy Manager or Traefik, both of which can run in Docker. If you set everything up with a wildcard certificate via a DNS challenge, you can keep the services you run hidden from people scanning DNS records on your domain (i.e., people won't know that an SSL certificate was issued for immich.your.domain). How you set up the DNS challenge will vary by DNS provider and reverse proxy service, but the only additional thing you will likely need for a wildcard certificate, regardless of which services you use, is an email address (again, assuming you have a domain name).
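As an illustration, the DNS-challenge piece of Traefik's static configuration is only a few lines (the email and provider values below are placeholders; the chosen provider also needs a matching API token supplied via environment variables):

```yaml
# traefik.yml sketch: ACME with a DNS-01 challenge for wildcard certs.
certificatesResolvers:
  letsencrypt:
    acme:
      email: you@your.domain          # placeholder
      storage: /letsencrypt/acme.json
      dnsChallenge:
        provider: cloudflare          # placeholder; set your DNS provider
```

A router can then request `your.domain` plus `*.your.domain` from the `letsencrypt` resolver, and Traefik handles issuance and renewal.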
> I was looking for a free open-source sharing platform first
What type of sharing platform are you looking for? A git repo? A single file sharing service? A code/text snippet sharing service? Something else?
There are many options available. Some have free, public instances available for use. Others require you to self host the service. Regardless, you're not stuck using GitHub just to share your user.js file.
> the only sites I give permanent cookie exceptions are my selfhosted services
This is what I was referring to. How are you accomplishing this?
> I'm still looking for the switches to block all new requests asking to access microphone, location, notifications
I can't help with this at the moment, but if you're still struggling with it, I can provide the lines required to disable these items. However, I don't know how to do this with exceptions (i.e., allowing your self hosted sites to use that functionality while blocking all other sites). At a minimum, though, you could require Firefox to ask you every time a site wants to use something. This may get repetitive for things like your self hosted sites if you have everything clearing when you exit Firefox.
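For reference, the global switches are the standard `permissions.default.*` prefs, which a user.js can set like this (0 = always ask, 1 = allow, 2 = block; this is a sketch of the blanket-block approach, not per-site exceptions):

```js
// user.js sketch: globally block new permission requests.
user_pref("permissions.default.geo", 2);                  // location
user_pref("permissions.default.microphone", 2);           // microphone
user_pref("permissions.default.desktop-notification", 2); // notifications
```

Setting a value to 0 instead gives the ask-every-time behavior mentioned above.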
Didn't look at the repo thoroughly, but I can appreciate the work that went into this.
- Is there any reason you went this route instead of just using a user-overrides.js file for the standard arkenfox user.js file?
- Does the automatic dark theme require enabling any fingerprintable settings (beyond possibly determining the theme of the OS/browser)?
- How are you handling exceptions for sites? I assumed it would be in the user.js file, but I didn't notice anything in particular handling specific URLs differently.
https://github.com/owntracks/android
The F-Droid version (which is available on IzzyOnDroid's repo) utilizes OSM. You'll need a server to sync the data to though and it likely does not have all of the features that Life360 has.
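If it helps, a minimal docker-compose sketch for the OwnTracks Recorder backend looks like this (image name and port are from the owntracks/recorder project; an MQTT broker, if you use MQTT mode instead of plain HTTP, is not shown):

```yaml
# docker-compose sketch for the OwnTracks Recorder (HTTP mode).
services:
  otrecorder:
    image: owntracks/recorder
    ports:
      - "8083:8083"       # web UI and HTTP endpoint
    volumes:
      - ./store:/store    # location history persists here
```

The Android app is then pointed at the recorder's HTTP endpoint in its connection settings.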
How do you use your Beelink? More specifically what OS (and maybe core/most used apps) do you have installed? How do you interact with it (eg - wireless keyboard/mouse, USB IR receiver, etc.)?
Any downside to this approach compared to using the Smart TV/Android TV/Apple TV features?
> Calls made from speakers and Smart Displays will not show up with a caller ID unless you’re using Duo.
Is it still possible to use Duo? Google knows it discontinued/merged Duo into Google Meet nearly 18 months ago, right?
I'm still not sure what point you are trying to make. Your initial claim was:
> Although Mozilla encrypts the synced data, the necessary account data is shared and used by Google to track those.
@utopiah@lemmy.ml asked:
> Are you saying Firefox shares data to Alphabet beyond Google as the default search engine? If so and if it applies to Sync (as if the question from OP here) can you please share sources for that?
You stated:
> Mozilla does, sharing your account data
You also provided evidence that Mozilla uses Google Analytics trackers on Firefox's product information website. I mentioned that this isn't sufficient evidence for your claim, as the trackers are independent of both Firefox the browser and Sync. Additionally, the use of trackers on websites is clearly identified in Mozilla's privacy policies, and not much else is mentioned in those policies beyond the trackers and Google's geolocation services in Firefox.
You've also mentioned Google's contract with Mozilla, which is controversial for many people, but isn't evidence of Mozilla providing user data to Google even in conjunction with the previously mentioned trackers. You then discussed various other browsers, but I'm not sure how that is relevant to your initial claim.
While it seems we can both agree that Mozilla and its products are far from perfect, your initial claim looks baseless, as you have yet to provide any evidence for it. Do you have any evidence, through things like code reviews or packet inspections of Firefox or Sync, that hints Mozilla is sharing additional information with Google? At this point, I would even accept users documenting some weird behavior, like the recent issue where google.com wouldn't load in Firefox on Android, if someone could connect that behavior to Mozilla sharing data with Google.
I don't understand what point you are trying to make. Mozilla has several privacy policies that cover its various products and services which all seem to follow Mozilla's Privacy Principles and Mozilla's overarching Privacy Policy. Mozilla also has documentation regarding data collection.
The analytics trackers you mentioned fall under Mozilla's Websites Privacy Policy, which does state that it uses Google Analytics; this can easily be verified in a number of ways, such as the services you previously listed.
However, Firefox Sync uses https://accounts.firefox.com/, which has its own privacy policy. There is some confusion around "Firefox Accounts" since it was rebranded to "Mozilla Accounts", which again has its own privacy policy. There is no indication that data covered by those policies is shared with Google. If Google Analytics trackers on Mozilla's websites are still a concern for these services, you can verify that the Firefox Accounts and Mozilla Accounts URLs do not contain any Google Analytics trackers.
Firefox has a privacy policy as well, with sections for both Mozilla Accounts and Sync. Neither indicates that data is shared with Google. Additionally, the data stored via the Sync service is encrypted. However, Mozilla does collect some telemetry data regarding Sync; more information can be found in Mozilla's documentation about telemetry for Sync.
The only thing that I could find about Firefox, Sync, or Firefox Accounts/Mozilla Accounts sharing data with Google was for location services within Firefox. While it would be nice for Firefox not to use Google's geolocation services, it is a reasonable concession and can be disabled.
Mozilla is most definitely not a perfect company, even when it comes to privacy. Even Firefox has been caught with privacy issues relatively recently, such as the unique installation ID.
Again, I'm not saying that Mozilla is doing nothing wrong. I am saying that your "evidence" that Mozilla is sharing Firefox, Sync, or Firefox Accounts/Mozilla Accounts data with Google because of Google Analytics trackers on some of Mozilla's websites is coincidental at best. Without additional evidence, it is misleading or flat out wrong.
I'm not disputing the results, but this appears to be checking calls made by Firefox's website (https://www.mozilla.org/en-US/Firefox/) and not Firefox, the web browser application. Just because an application's website uses Google Analytics does not mean that the application shares user data with Google.
Change Detection can be used for several use cases. One of them is monitoring price changes.
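If you want to try it, a minimal docker-compose sketch for changedetection.io (image name and port are from that project's docs; the volume path is a placeholder):

```yaml
# docker-compose sketch for changedetection.io.
services:
  changedetection:
    image: ghcr.io/dgtlmoon/changedetection.io
    ports:
      - "5000:5000"            # web UI
    volumes:
      - ./datastore:/datastore # watch list and history persist here
```

From the web UI you add the product page URL, select the price element, and set a check interval and notification target.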
Some additional ideas for the Protectli device:
- backup/redundant OPNsense instance for high availability
- backup NAS/storage
- set it up at a family/friend's house
- a test/QA device for new services or architecture changes
- travel router/firewall
- home theater PC
- Proxmox/virtualization host
- Kubernetes cluster
- Tor, I2P, cryptocurrency, etc. node
- Home Assistant
- dedicated local STT/TTS/conversation agent
- NVR
- low powered desktop PC
There are so many options. It really depends on what you want, your other devices, the Protectli's specs, your budget, etc.
Could you explain further? Wouldn't this just need to be set up once per server that OP wants to connect to?
Could you use symlinks? Not sure what the "gotchas" or downside to this approach is though.
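For what it's worth, the mechanics are a one-liner (paths below are placeholders); one known gotcha is that some tools, such as rsync and certain backup jobs, copy the link itself rather than following it unless told otherwise:

```shell
# Sketch: expose an existing directory at a second path via a symlink.
mkdir -p /tmp/symlink_demo/target
echo "hello" > /tmp/symlink_demo/target/file.txt
ln -sfn /tmp/symlink_demo/target /tmp/symlink_demo/link
cat /tmp/symlink_demo/link/file.txt   # reads through the symlink
```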
tl;dr: Multiple browsers and browser components must each maintain a notable market share in order to properly ensure/maintain truly open web standards.
It is important that Firefox and its components, like Gecko and SpiderMonkey, exist and maintain a notable market share. Likewise, it is important for WebKit and its components to exist and maintain a notable market share. The same is true for any other browser, rendering engine, or JavaScript engine.
While it is great that we have so many non-Google Chrome alternatives like Chromium, Edge, Vivaldi, etc., they all use the same or very similar engines. This means that they all display and interact with websites nearly identically.
When Google decides that a certain implementation/interpretation of web standards, formats, behavior, etc. should be included in Google Chrome (and consequently all Chromium-based browsers), the majority market share of web browsers will behave that way. If Chrome/Chromium-based browsers reach a nearly unanimous browser market share, then Google can ignore any or all open web standards, force its will in deciding/implementing new open web standards, or even become the de facto open web standard.
When any one entity has that much control over the open web standards, the standards are no longer truly "open" and in this case become "Google's web standards". In some (or maybe even many) cases, this may be fine. However, as we saw with Internet Explorer in the past, this is not something the market should allow. We are already seeing evidence that we shouldn't allow Google this much influence, with things like the adoption of JPEG XL or the implementation of FLoC.
With three or more browser engines, rendering engines, and browsers holding notable market shares, web developers are forced to develop in adherence to the accepted open web standards. With enough market share spread across those engines/browsers, the various engines/browsers are incentivized to maintain compatibility with open web standards. As long as the open web standards are designed and maintained without overt influence from a single entity or a few entities, and the open standards are actively used, the best interests of all internet users are served.
Otherwise, the interests of a few entities (in this case, Google) are served instead.
I agree that Home Assistant's audit is a good thing. While I love that Home Assistant is open source, I'm not sure how that impacts the audit. Proprietary, closed source software can be audited with few differences from an open source software's audit. The biggest difference is that you, myself, or anyone could audit open source software, but it would not be easy for that to happen with closed source software.
Alerts, notifications, person recognition, object recognition, motion detection, two way audio, automated lights, event based video storage, 24/7 video storage, automated deletion of stale recorded video, and more can all be accomplished 100% locally.
Granted, much of this functionality is not easily accomplished without some technical knowledge and additional hardware. However, these posts are typically made by people who state they at least have an interest in making that a reality (as this one does).
What security benefits does a cloud service provide?
Thanks for the reply! I am currently looking to do this for a Kubernetes cluster running various services to more reliably (and frequently) perform upgrades with automated rollbacks when necessary. At some point in the future, it may include services I am developing, but at the moment that is not the intended use case.
I am not currently familiar enough with the CI/CD pipeline (currently Renovatebot and ArgoCD) to reliably accomplish automated rollbacks, but I believe I can get everything working with the exception of rolling back a data backup (especially for upgrades that contain backwards incompatible database changes). In terms of storage, I am open to using various selfhosted services/platforms even if it means drastically changing the setup (eg - moving from TrueNAS to Longhorn, moving from Ceph to Proxmox, etc.) if it means I can accomplish this without a noticeable performance degradation to any of the services.
I understand that it can be challenging (or maybe impossible) to reliably generate backups while the services are running. I also understand that the best way to do this for databases would be to stop the service and perform a database dump. However, I'm not too concerned with losing <10 seconds of data (or however long the backup jobs take) if the backups can be performed in a way that does not result in corrupted data. Realistically, the most common use cases for the rollbacks would be invalid Kubernetes resources/application configuration as a result of the upgrade or the removal/change of a feature that I depend on.
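One pattern that might fit here is an Argo CD PreSync hook that dumps the database before each sync, so a failed upgrade has a snapshot to roll back to. This is only a sketch: the secret, PVC, and image names are placeholders, and it assumes a Postgres-style database reachable via a connection URL.

```yaml
# Hypothetical Argo CD PreSync hook: back up the DB before every sync.
apiVersion: batch/v1
kind: Job
metadata:
  generateName: pre-upgrade-db-dump-
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: pg-dump
          image: postgres:16
          command: ["sh", "-c",
            "pg_dump \"$DATABASE_URL\" > /backups/pre-upgrade-$(date +%s).sql"]
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef: {name: app-db, key: url}   # placeholder secret
          volumeMounts:
            - {name: backups, mountPath: /backups}
      volumes:
        - name: backups
          persistentVolumeClaim: {claimName: backup-pvc} # placeholder PVC
```

Because the hook runs before the new manifests are applied, a dump taken here predates any backwards-incompatible schema migration, which addresses the rollback case you describe.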