
  • I disagree. I think both the current state and the state it will change to should be clearly labeled.

    Also - just because everyone is familiar with something doesn't make it a good user experience. We're all used to play/pause buttons, but honestly they're not very good.

  • IMAP is a mess. None of the major email services or clients properly implement the protocol, and pretty much all of the major services have a proprietary replacement of their own, with IMAP as an afterthought. That's why so many of the best email clients these days only work (or only have all features enabled) with Gmail or Office365.

    For the user, the main problem is that it's slow. It can literally take ten seconds just to check whether there's any mail, and that's when there are no new messages. When there are messages, it takes much longer.

    It's not slow because the servers are slow, it's slow because IMAP sucks. Too many round trips, and the requests aren't really in a format that works well with the databases clients and servers actually use.

    For developers there are bigger problems: it's incredibly difficult to write an IMAP client that works at all with every email server.

    JMAP fixes all of those issues. It's still not perfect - I think a perfect protocol would use Activity Streams - but it's definitely the best (open) email protocol available right now.
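
    To make the "fewer round trips" point concrete, here's a rough sketch in Python - the endpoint URL, account id and mailbox id are placeholders a real client would discover from the server's session object - of a single JMAP request that both searches a mailbox and fetches the matching messages via a back-reference, something IMAP needs several round trips for:

    ```python
    # Rough sketch, not a drop-in client: one JMAP request that searches the
    # inbox and fetches the matching message headers in the same HTTP call.
    # With IMAP this would be SELECT, then SEARCH, then one or more FETCHes.
    import requests

    JMAP_API_URL = "https://mail.example.com/jmap/api"  # placeholder endpoint
    ACCOUNT_ID = "u123"                                  # from the session object
    INBOX_ID = "mailbox-inbox"                           # from a Mailbox/get call

    payload = {
        "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
        "methodCalls": [
            # 1. Find the 10 newest messages in the inbox...
            ["Email/query",
             {"accountId": ACCOUNT_ID,
              "filter": {"inMailbox": INBOX_ID},
              "sort": [{"property": "receivedAt", "isAscending": False}],
              "limit": 10},
             "q"],
            # 2. ...and fetch their headers in the same request, using a
            #    back-reference to the ids produced by the query above.
            ["Email/get",
             {"accountId": ACCOUNT_ID,
              "#ids": {"resultOf": "q", "name": "Email/query", "path": "/ids"},
              "properties": ["subject", "from", "receivedAt"]},
             "g"],
        ],
    }

    resp = requests.post(JMAP_API_URL, json=payload, auth=("user", "app-password"))
    for name, args, call_id in resp.json()["methodResponses"]:
        print(name, args)
    ```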

  • It would be nice if none of this was necessary... but we don't live in that world. There is a lot of straight up bullshit in the news these days especially when it comes to controversial topics (like the war in Gaza, or Covid).

    You could go a really long way by just giving all photographers the ability to sign their own work. If you know who took the photo, then you can make a good decision about whether to trust them or not.

    Random account on a social network shares a video of a presidential candidate giving a speech? Yeah maybe don't trust that. Look for someone else who's covered the same speech instead, obviously any real speech is going to be covered by every major news network.

    That doesn't stop ordinary people from sharing presidential speeches on social networks. But it would make it much easier to identify fake content.

  • Click the padlock in your browser, and you'll be able to see that this webpage (if you're using lemmy.world) was served over an encrypted connection by a server that Google Trust Services has verified is controlled by lemmy.world. In addition, your browser will remember that... and if you later get a page from the same address whose certificate was verified by a different authority, the browser (should) flag that and warn you that something might be wrong.

    The idea is you'll be able to view metadata on an image and see that it comes from a source that has been verified by a third party such as Google Trust Services.

    How it works, mathematically... well, look up "asymmetric cryptography and hashing". It gets pretty complicated and there are a few different mathematical approaches. Basically though, the White House will have a key that they will not share with anyone, and only that key can be used to authorise the metadata. Even Google Trust Services (or whichever certificate authority you use) does not have the key.

    There's been a lot of effort to detect fake images, but that's really never going to work reliably. Proving an image is valid, however... that can be done with pretty good reliability. An attack would be at home in Mission Impossible. Maybe you'd break into a White House photographer's home at night, put their finger on the fingerprint scanner of their laptop without waking them, then use their laptop to create the fake photo... delete all traces of evidence and GTFO. Oh, and everyone would know which photographer supposedly took the photo; ask them how they took that photo of Biden acting out of character, and the real photographer will immediately say they didn't take it.
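
    As a toy illustration of the "asymmetric cryptography and hashing" part - this is not the actual scheme the White House or any camera vendor uses, and the file name and key handling here are made up - you hash the image bytes, sign the hash with a private key that never leaves its owner, and anyone can check the result against the published public key:

    ```python
    # Toy sketch of signing and verifying an image. Real systems embed the
    # signature and signer identity in the image metadata; this only shows
    # the maths: only the private key holder can produce a valid signature.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The publisher generates this once and never shares the private half.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()          # this half is published widely

    image_bytes = open("photo.jpg", "rb").read()   # placeholder file
    digest = hashlib.sha256(image_bytes).digest()  # fingerprint of the exact bytes
    signature = private_key.sign(digest)           # only the key holder can do this

    # Anyone can verify: change a single pixel and verification fails.
    try:
        public_key.verify(signature, digest)
        print("Valid: this file was signed by the key holder.")
    except InvalidSignature:
        print("Invalid: the file was altered or signed by someone else.")
    ```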

  • They have a 100+ mile range and often the same content will be broadcast on a different frequency by other towers. A lot of people would have just switched to the other frequency and moved on with their lives. You might have three frequencies to choose from for the same content.

  • This reminds me of the time someone stole a mango in Australia... obviously, being Australia, the mango was very big.

    It was just a marketing stunt and it backfired — police weren't too happy about wasting their time investigating a fake crime. Even after they were told what happened, they still had phones ringing off the hook with people calling in evidence, wasting government resources.

  • That problem has been solved. A bit over a year ago NIF was able to produce about 3MJ of energy with about 2MJ of input.

    This particular experiment didn't do that, but that likely wasn't the goal... They managed a 69MJ output over five seconds... the NIF experiment released less total energy, but over "a few billionths of a second".

    69MJ over five seconds is roughly 14MW, which is a very usable amount of power - about on par with the typical output of a real-world utility generator. The NIF result, by comparison, was more like a small lightning strike: impressive but not useful. It's not enough to power an entire city, but you kinda don't want that anyway, since a city should have redundancy. Several generators of this size could power a city with enough excess production to take one or two of them offline for maintenance. If you combined this with solar/wind(*), you could have two or three fusion reactors for a large city.

    The tech is still not ready of course, but it's getting closer and seems to be accelerating too - those two breakthroughs were a year apart. This generator is right in the sweet spot; now they just need to improve efficiency and reliability and bring costs down.

    (* I seriously doubt fusion is ever going to be cheaper than solar / wind / hydro - but it could be more reliable making it a great "baseload" option - enough to keep the lights on, fridges cool, etc)

  • build a dynamic library with a new instantiation, then dynload it and off we go

    I haven't played around with the internals of C++ myself, but isn't that a one way thing? Wouldn't you need to be able to "unload" a query after you're done with it?

    Personally I think child processes are the right approach for this. Launch a new process* for each query, and it can (if you choose to go that route) dynamically load in compiled code. Exit when you're done, and the dynamically loaded code is gone. A side benefit is that memory leaks are contained, since everything the process allocated is released when it exits anyway.

    (*) On most operating systems launching a new process is a bit slow, so you likely wouldn't want to do that when the query is requested. Instead you'd maintain a pool of processes that are already running and ready to receive a query. That's how HTTP servers are often configured to run. The size of the pool is generally limited by how much memory each process needs. Is it 1MB per process? 2GB?
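
    Here's a rough sketch of that pool idea using Python's standard library - compile_and_load() is just a stand-in for whatever actually builds and loads the generated code (e.g. ctypes.CDLL on a freshly built .so). With maxtasksperchild=1, each worker exits after a single query, so anything it loaded or leaked disappears with it:

    ```python
    # Pre-forked worker pool, one query per process (sketch, not production code).
    import multiprocessing as mp

    def compile_and_load(query: str):
        # Placeholder: the real system might emit C++, build a shared object,
        # and load it with ctypes.CDLL("/tmp/query_xyz.so"). Here we fake it.
        return lambda: f"result of {query!r}"

    def run_query(query: str) -> str:
        handler = compile_and_load(query)
        return handler()  # any memory this leaks dies with the worker process

    if __name__ == "__main__":
        queries = ["SELECT 1", "SELECT 2", "SELECT 3"]
        # Four warm workers wait for queries; each is replaced after one task.
        with mp.Pool(processes=4, maxtasksperchild=1) as pool:
            for result in pool.map(run_query, queries):
                print(result)
    ```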

    Honestly, I wonder if you could just use an actual HTTP server for this? They can handle hundreds or even thousands of simultaneous requests. They can handle requests that complete in a fraction of a millisecond or ones that run for several hours. And they have good tools to catch/deal with code that segfaults, hits an endless loop, attempts to allocate terabytes of swap, etc. HTTP also has wonderful tools to load balance across multiple servers if you do need to scale to massive numbers of requests.

    I would also seriously consider using JavaScript instead of C++. I hate JavaScript... but modern JavaScript JIT compilers are really special... they apply compiler optimisations AT RUNTIME. So a loop will compile to different machine code if it iterates three times vs three million times. The code is literally recompiled on the fly when the JIT compiler detects a tight loop. Same thing with a function that's called over and over again - it will be inlined if inlining is appropriate.

    As flexible as your system sounds, I suspect runtime optimisations like that would provide real performance advantages. Well optimised C++ code is faster than JavaScript, but you're probably not always going to generate well optimised code.

    JavaScript would also eliminate entire categories of security vulnerabilities. And any time you're generating code on the fly, you really need to be careful about those.

    The good news is that if you use an HTTP server like I suggested... then you can literally use any language you want - C++, JavaScript, Python, Rust... you can decide on a case-by-case basis.
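
    For what that front door could look like, here's a minimal standard-library sketch - execute_query() is a placeholder, and in practice you'd put a proper server (with its timeout, worker-recycling and load-balancing machinery) in front of something like this:

    ```python
    # Tiny HTTP front end for query execution (sketch only). POST a query as the
    # request body, get JSON back. Try: curl -X POST --data 'SELECT 1' localhost:8080
    import json
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    def execute_query(query: str) -> dict:
        # Stand-in for handing the query off to a worker process / compiled plugin.
        return {"query": query, "rows": []}

    class QueryHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            query = self.rfile.read(length).decode("utf-8")
            body = json.dumps(execute_query(query)).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        ThreadingHTTPServer(("127.0.0.1", 8080), QueryHandler).serve_forever()
    ```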

  • Arc has a lot of unique features, but these three are fundamental. It took me a couple of months to adjust my browsing habits, and it has ruined me. I literally hate using any other browser now.

    Arc syncs tabs between windows. This allows new workflows that just aren't possible in any other browser. I have five windows open right now, all showing the same set of tabs (which are split into groups, so not every window is showing the same group... but some are - for example, I opened Sideberry in a new tab in this group, but I'm reading it in another window). This also obviously syncs between devices, so your work desktop, laptop, phone, gaming PC at home... all have the same tabs. All the time.

    Arc automatically closes tabs for you. That definitely takes getting used to, but once you do get used to it, it's awesome.

    Finally, there are no bookmarks in Arc. Instead of bookmarks you can "pin" any tab, which essentially disables auto-close for that tab. Unlike a bookmark, a pinned tab doesn't contain a fixed URL. For example, I have a Lemmy tab, which by default is the homepage - but it could also temporarily be this discussion, then go back to being the homepage again later today.

    Those three are tied together in a carefully thought out user interface, with a bunch of other nice little touches like the way audio/video is handled if you're not in the tab that's playing media right now.

    There are other major features in Arc too - for example the URL bar is, well, not a URL bar at all. It's a command prompt where the default command happens to be "search the web / go to URL". Arc also has a growing set of Large Language Model integrations that might pan out into something interesting one day. And it has some half-baked stuff for teams/collaboration which may or may not turn into anything.

  • How many individuals can drive cars before congestion makes it impossible

    It's impossible to answer that - there are just too many other variables, such as how far people are travelling each day on average, how many of them are going to the same destination, how many roads there are (not how many lanes - how many roads), etc.

    A lot of the problem can be mitigated with zoning rules that encourage people not to travel to the inner city. Whatever they might need to go to the CBD for should also be available elsewhere in the city, if at all possible.

    The fact is, trains also have traffic issues, and those tend to get a lot worse as you increase the number of train lines in a city. Part of why train travel is efficient is that not many people use that mode of transport. Cities where 10% of travel is by train now probably can't expand that to 80%.

    Diversity is the only option. Give people access to every mode of transit, and let them pick the best one. I'm not from California so I don't know the local issues, but looking at a map, I-10 has six train lines that run basically parallel to it. Trains are clearly available, so why are people choosing to drive? I'm sure they have a reason. Rather than trying to add more train lines, how about figuring out why people are driving that route and tackling it from that perspective? What are they heading into LA for? Can it be done somewhere else?

  • Most daily commuters could get used to a train

    It's definitely not "most". You have to live and work near a train station for that to be a viable option. It's not about "getting used to" trains; it's just that for most commutes a train simply takes too long, because it doesn't go directly to your destination.

    In Denmark, which has one of the best transit networks in the world, only 13% of commuting is by public transport. 20% is by bicycle. Cars are 60%.

  • First of all, there are three (popular) hydrogen cars available, and only one of them is from Toyota. And more are scheduled to launch very soon.

    Everyone who's tested those three cars loves them. The Toyota Mirai is supposedly very similar to, and in fact nicer than, the Lexus LS 500 and it's also tens of thousands of dollars cheaper than that car. When (and I believe it is a "when") hydrogen is easier to access, it's going to take off.

    The only real drawback hydrogen ever had was cost, but that's not an issue anymore - prices have come down a lot. And the "range anxiety" issue is helped tremendously by just having a really, really long range. You're only going to fill up twice a month or so.

  • Why not just pass a law that no one can generate electricity except from green sources? It sounds so easy when I put it like that.

    Um - those laws have been passed in many countries. Usually with a reasonable approach such as "you can continue operating the coal plants that were already built, but no more can be built".

    What's actually happening around the world though is those plants are becoming too expensive to run, so they're shutting down even if they are allowed to continue to operate. Renewable power is just cheaper.

    About two thirds of global electricity production is zero emission now, and it'll be around 95% in 25 years or so.

    Source (note: this is a "renewables" article, not a "zero emission" article. Some non-renewable energy produces zero emissions and there's not expected to be much movement on that in the foreseeable future): https://renewablesnow.com/news/renewables-produce-85-of-global-power-nearly-50-of-energy-in-2050-582235/

  • You mean "carbon offset", not "carbon capture". Carbon capture is where you extract carbon out of the air and make concrete or something else out of it. Capture isn't widely done but likely will be soon.

    Carbon offsets are very useful. They paid for a sizeable portion of the solar installation on my home, for example, which has cut my household power emissions by about two thirds - and that's with us selling about 80% of the generated power to the grid (where it reduces emissions for other households).