Flying four times in your lifetime: nearly half of French people say they'd agree, according to a poll
Hello, and sorry if my comment hurt you. We can certainly debate this point if you'd like, but starting off so aggressively is going to make it difficult to get me to hear your arguments :)
I'm totally against it. It will annoy the majority of the population without changing much in the end, because the big polluters (like Bolloré with his private jet) will keep flying several times a day, unaffected by this idea. Once again, we're aiming in the wrong direction.
I do maintain there's an effort to be made to favour other modes of transport, but personally I fly once a year to go abroad, and that's because the alternatives to flying are really terrible, even within Europe. Let's work on that first.
Mail tester is good, and I'll add MX Toolbox, which can also check a lot of other DNS settings and help with email deliverability.
Congratulations! A mail server is quite demanding in terms of initial setup, but it's also very rewarding!
Here are a few pointers I can give you:
- Using a good domain is important; some providers block entire TLDs known for cheap domains (e.g. .tk or .pw). I learnt that the hard way...
- Point your MX records at a hostname with an A record, not a CNAME
- Ensure your PTR records match your A records for the mail server
- Learn about SPF and DKIM
- Set them up, and verify with mxtoolbox
- Use the `ip4:` and/or `ip6:` mechanisms for SPF
- Set up a spam filter (I like spamassassin)
- Leave it all running for a few weeks/months
- Publish a DMARC policy on your DNS, and verify with mxtoolbox
This should greatly reduce your likelihood of ending up in spam folders (which is usually the hardest part of running your own mail server).
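As a rough sketch, the SPF, DKIM and DMARC pieces above end up as DNS records along these lines (the domain, IP, selector and key here are placeholders, not real values):

```
; illustrative zone entries -- domain, IP, selector and key are placeholders
example.org.                        IN MX  10 mail.example.org.
mail.example.org.                   IN A   203.0.113.25
example.org.                        IN TXT "v=spf1 ip4:203.0.113.25 -all"
myselector._domainkey.example.org.  IN TXT "v=DKIM1; k=rsa; p=<public-key>"
_dmarc.example.org.                 IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
```

The PTR record for 203.0.113.25 would then point back to mail.example.org, matching the A record.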
Not trying to discourage you
Well, that's exactly what it sounds like :/
Don't listen to him OP, running your own email server is not "a world of hurt".
The initial configuration involves quite a few things (DNS records, DKIM, spam filters, ...), but it's definitely manageable. And once all this is set up, you don't have to touch it anymore, it just works!
I've been doing it for years now, and I'm not going back! Congratulations on doing it, and good luck keeping it running!
It's instant in my case as well, but I don't have a huge amount of logs yet. I'm still figuring out this whole setup and what its strengths and weaknesses are.
I'm using InfluxDB 1.8 though (which is old), because that's the version shipped in the OpenBSD repos. It crashes fairly often when you perform "illegal" operations, which is annoying. For instance, the `DELETE FROM` command only lets you use the `time` field in the `WHERE` clause; using any other field would crash the DB. I might recompile it from scratch at some point because it lacks too many features from upstream. But for now, it does a decent job, and is really easy to set up (this was the killer feature for me).
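For the record, a time-only delete (the one form that doesn't crash for me on 1.8) looks like this; the measurement name and the 30-day cutoff are just examples:

```
-- InfluxDB 1.x: only a time-based predicate in the WHERE clause is safe here
DELETE FROM "syslog" WHERE time < now() - 30d
```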
You'll want to check this out: https://www.tumfatig.net/2022/ads-blocking-with-openbsd-unbound8/
That's the post I took inspiration from for this setup. It does use collectd and custom awk scripts for log ingestion though, where I simply use telegraf.
I've just started digging into it myself! Here's my current setup (I'll see how it scales in the long term):
- syslog on every host
- Telegraf collects and parses logs
- InfluxDB stores everything
- Grafana for dashboards
I run OpenBSD on all my servers, and configure all the services to log via syslog.
Then I configure syslog to forward only the logs I care about (https, DNS, ...) to a central telegraf instance, using the syslog protocol (RFC3164).
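On OpenBSD, that forwarding can be sketched in /etc/syslog.conf roughly like this (the collector hostname and port are assumptions, and the exact action syntax should be checked against syslog.conf(5)):

```
# forward httpd and unbound messages to the central telegraf collector
!httpd
*.*     @collector:6514
!unbound
*.*     @collector:6514
!*
```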
On this collector, telegraf receives all these logs and parses them using custom grok patterns I'm currently building, to make sense of every log line it receives. The parsed logs are in turn stored in InfluxDB, running on the same host.
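Under the hood, a grok pattern is essentially a named regular expression. A minimal Python sketch of that expansion step (the mini pattern library and field names here are made up for illustration, not telegraf's actual implementation):

```python
import re

# Tiny illustrative pattern library; real grok libraries ship hundreds of these.
GROK_LIB = {
    "IP": r"\d{1,3}(?:\.\d{1,3}){3}",
    "WORD": r"\w+",
    "NUMBER": r"\d+",
}

def grok_to_regex(pattern: str) -> str:
    """Expand each %{TYPE:name} token into a named regex capture group."""
    def repl(m):
        typ, name = m.group(1), m.group(2)
        return f"(?P<{name}>{GROK_LIB[typ]})"
    return re.sub(r"%\{(\w+):(\w+)\}", repl, pattern)

# Hypothetical log line and pattern, for demonstration only.
line = "192.168.1.10 GET 200"
rx = grok_to_regex(r"%{IP:client} %{WORD:verb} %{NUMBER:code}")
fields = re.match(rx, line).groupdict()
print(fields)  # {'client': '192.168.1.10', 'verb': 'GET', 'code': '200'}
```

Each named group becomes a field you can then store and query, which is exactly what the `[[processors.parser]]` block does with the parsed message.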
I then use Grafana to query InfluxDB and create dashboards out of these logs. Grafana can also display the logs "as-is" so you can search through them (it's not ideal though, as you simply search by regex over the full message, so it's at least on par with grep).
This setup is fairly new and seems to work very well. Telegraf is also very light on resource usage for now. I'll have to continue adding grok patterns and send more application logs to it to see how it handles the load. I still have a few unanswered questions for now, but time will tell:
Q: Should I first collect via a central syslog before sending to telegraf?
This would let syslog archive all logs in plain text, and rotate and compress them. I would also only have a single host to configure for sending logs to telegraf. However, this would eat up space and could hide the original sending hostname for each log. I might try that someday.
Q: Should I run telegraf on each host?
This would distribute the load of the grok parsing amongst all hosts; each telegraf process would then send directly to the central one for collection, or even directly into influxdb. I would also benefit from telegraf being installed on each host to collect more data (CPU, network stats, ...). However, it makes the configuration more complex to handle.
Q: What is a good retention period?
For now, InfluxDB doesn't expire any data, as I don't have much yet. In the long run, I should probably delete old data, but it's hard to tell what counts as "old" in my case.
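If I do settle on a duration, InfluxDB 1.x can expire data automatically via a retention policy; a sketch with an arbitrary 90-day window:

```
-- auto-expire points older than 90 days (the duration is a placeholder choice)
CREATE RETENTION POLICY "ninety_days" ON "telegraf" DURATION 90d REPLICATION 1 DEFAULT
```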
Q: Do I need an interface to read logs?
I use this setup mostly for graphs, as grafana can make sense of fields like "http_verb", "http_code" and such. However, it is much more practical for me to dig into the logs right on the server, in `/var/log`. Having an interface like chronograf or graylog seems practical, but I feel like it's overdoing it.
Bonus:
You don't need to access a .onion instance to use Tor. You can simply perform your day-to-day web usage through Tor directly.
On your phone, you can even use Tor natively with most of your apps.
Also works as a PLC, how useful!
You must be fun at parties
And followed its white rabbit.
Definitely a nice move, as I couldn't browse with netsurf/Dillo! But why old.lemmy.sdf.org? I get that it's the old reddit interface, but hey, why not just "alt.lemmy.sdf.org"?
Do not use Dendrite for multi-user setups if you plan to run bridges. Contacts handled by the bridges are visible to the whole server, which means it leaks a lot of information about your contacts (names, phone numbers, ...). I'm also not sure that multi-user puppeting is supported with dendrite.
I would advise you to run Synapse because of that.
I found out how to parse and tokenize logs within telegraf: you must use grok patterns to parse the logs. Here is the config sample I use:
```toml
# bind locally to ingest syslog messages
[[inputs.syslog]]
  server = "udp://<ipaddress>:6514"
  syslog_standard = "RFC3164"

[[processors.parser]]
  parse_fields = ["message"]
  merge = "override"
  data_format = "grok"
  # these must reference the names from grok_custom_patterns
  grok_patterns = ["%{HTTPD}", "%{GEMINI}"]
  # format: PATTERN_NAME GROK_PATTERN
  grok_custom_patterns = '''
HTTPD ^%{HOSTNAME:httphost} %{COMBINED_LOG_FORMAT} (?:%{IPORHOST:proxyip}|-) (?:%{NUMBER:proxyprot}|-)$
GEMINI ^(?:\"(?:gemini\:\/\/%{HOSTNAME:gmihost}(:%{NUMBER:gmiport})?%{NOTSPACE:request}|%{DATA:raw_request})\" %{NUMBER:response} %{NUMBER:bytes}|%{DATA})$
'''

# send parsed logs to influxdb
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"
```
Telegraf supports the logstash core patterns, as well as its own custom patterns (like `%{COMBINED_LOG_FORMAT}`).
You can then query your influxdb using the fields extracted from these patterns:
```
> USE telegraf
> SELECT xff,httphost,request FROM syslog WHERE appname = 'httpd' AND verb = 'GET' ORDER BY time DESC
```
It does help, thank you ;)
I've found that you can use custom grok patterns to parse logs just as graylog extractors do. I'm still trying to figure it out, but so far I could start parsing logs using a `[[processors.parser]]` block. I'll document my findings when I get it working the way I want.
I store and query them using influxdb. I checked Loki, but apparently its main feature is that it stores the message as a single field, thus not parsing the log at all. I didn't know about Promtail. Is it better suited than influxdb for my use case?
Thanks for the feedback. I didn't try it because I didn't want to buy Sailfish OS (again...) just to end up with a broken phone and roll back to Android, especially as it breaks the warranty. I figured I could just wait for the next update for these issues to be fixed, but it never came, and at this point I simply did not bother getting the test image.
Would you have a link to the patch for the camera?
Obviously he's right on the substance; it's just the tone that set me off. I'm aware of the impact flying has on my overall footprint, but as of today I'm at the "minimum" given my needs/wants. Besides that, I make every effort elsewhere to limit my impact. If Teolan gives up travelling on ecological principle, I tip my hat to him, because that takes strong willpower and big sacrifices. Personally, I'm not yet ready to give it up entirely.