
  • It usually isn't. If it were, the server-side function wouldn't need constant runtime across different-length inputs, since the inputs would all arrive at the same length.

    The problem with client-side hashing is that it is very slow (client-side code means JavaScript for the foreseeable future, unless compatibility is sacrificed), unpredictable (many different browsers with differing feature sets and bugs), and timing-based attacks could still be performed in the client, say by a compromised browser add-on.

    In transit, the various packaging steps round off transfer sizes anyway; you typically generate constant physical activity up to around 1 kB. The Ethernet MTU sits at ~1500 bytes, for example, so a 200-byte packet carrying a 64-character password and a 1400-byte packet carrying a 1024-character password with some emoji will take exactly the same time on your local network.
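
    A minimal sketch of making that explicit by padding the login payload to a fixed size before it leaves the client (the /login endpoint, the 1 kB target, and the helper are all hypothetical):

    ```python
    import json
    import urllib.request

    PAD_TO = 1024  # arbitrary fixed payload size; assumes the body always fits

    def send_login(username: str, password: str) -> None:
        body = json.dumps({"user": username, "pass": password}).encode()
        # Pad with trailing spaces so every login request has the same
        # on-wire size regardless of password length; JSON parsers
        # ignore trailing whitespace.
        body += b" " * (PAD_TO - len(body))
        req = urllib.request.Request(
            "https://example.com/login",  # hypothetical endpoint
            data=body,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
    ```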

  • It's not failing in the technical sense, in the same way tech-support scams aren't a failure of online banking.

    You can consider the unfixable nature of such scams an inherent flaw of the system; I suppose it is. It's an inevitable tradeoff for the automated nature such a system has, compared to one where a central authority would have the ability to roll things back.

    On the other hand, plenty of online financial scams can't be rolled back either; often enough, banks simply pay you out of an insurance pool. The same could be implemented for blockchains, I suppose, or on top, as regular insurance specialized in "blockchain trading" or whatever. You could also enforce transaction locks, similar to a lot of bank transactions, though that would slow purchases in the same way.

    As for banks not running off with your stuff: rarely they do, but usually not, yes. There is a reason the core audience of blockchain technologies is paranoid people.

    The legitimate use case for fungible blockchains (cryptocurrencies) is countries (and corporations) regulating and limiting the anonymity, and even the possibility, of transactions. That has applications ranging from drug purchases (meth) to drug purchases (hormone therapy under anti-LGBT regimes).

    The use case of blockchain contracts, for example, is simple digital trade; currently I can only think of cryptocurrency exchange, since this fundamentally only makes sense for goods that are themselves on a blockchain.

    The legitimate use case of non-fungible blockchains (NFTs) is…

  • Most attacks on servers target the connections. All IPs are owned by entities that are part of countries, so your IP is always under someone's jurisdiction. The same is true for regular DNS entries, and thus the domain of that server.

    As for getting at the data, however, there also isn't any protection in international waters. Someone would just raid you, and you could do nothing about it. What good is lawlessness if you don't have the ability to enforce your own "laws" about not having your data taken away?

    You could lay low so no one bothers with that, but then you could also just lay low with regular secretive hosting.

  • You can't be 100% sure about organizations following these practices, not to the degree that blockchains allow. Organizations aren't fully transparent, and people are fallible.
    I still prefer HTTPS over all the secrecy we managed to get for letters before the digital era, even if the audit systems we had back then to ensure the secrecy of communications were impressive.

    Even with perfect audit trails and merge requirements, convincing a small group of people within the same organization is easier than convincing a larger, cryptographically herded pool of who-knows-who.

    You can argue about how likely that is to ever be relevant for practical applications, but it is a system that is perfect in ways its "predecessors" aren't.

  • For classical databases there is always someone with root access, who could modify whatever they want.
    In practice, for important stuff, there is a good chance enough people were observing to make a case based on witnesses, but it isn't exactly ideal.
    You don't often get banks running off with your money or some storage facility selling your stuff illegally, but it could happen. And that is enough for some (paranoid) people. Maybe some day there will even be applications that would not otherwise be feasible due to fear of scams.
    There is a use case for cryptocurrencies, so why not the highly related NFTs, where the only difference is that the stuff you own is a unique thing (like a title) instead of a bunch of non-unique things (like currency)?

  • Yeah, most people would think 4 is more than 3!, while 3! = 6 is actually 50% more than 4.

  • The admin team is distributed and the infra is in Europe, iirc.
    So no.

  • You can easily get the hash of whole files; there is no input-size constraint with most hashing functions.
    Dedicated password hashing implementations do have a limit, to guarantee constant runtime: the algorithm always takes as long as the worst-case longest input. The standard modern password hashing function (bcrypt) only considers the first 72 characters for that reason, though that cutoff is arbitrary, could easily be increased, and in some implementations is. Passwords that differ only past the 72nd character therefore receive the same hash, so you could arbitrarily change those trailing characters on every login, until the page migrates to a password hashing function with a longer limit, at which point the password used at the next login after the change gets locked in.
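
    A minimal sketch of that truncation, using the Python bcrypt package (assumed installed; it historically ignores bytes past the 72nd silently, though some versions may reject over-long inputs instead):

    ```python
    import bcrypt

    stored = bcrypt.hashpw(b"x" * 72 + b"original-tail", bcrypt.gensalt())

    # A password differing only after byte 72 still verifies:
    print(bcrypt.checkpw(b"x" * 72 + b"completely-different", stored))  # True
    # One differing within the first 72 bytes does not:
    print(bcrypt.checkpw(b"y" * 72 + b"original-tail", stored))  # False
    ```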

  • Cryptographic password hashing functions actually have fixed runtime too, to avoid timing-based attacks.
    So correct password implementations use the same storage and CPU time regardless of the password.
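
    Relatedly, comparisons of secrets should be constant-time as well, so an attacker can't learn where the first mismatching byte is. A minimal sketch using Python's standard library (plain SHA-256 stands in for a real password hash, just to keep the example short):

    ```python
    import hashlib
    import hmac

    def verify(stored_digest: bytes, attempt: str) -> bool:
        computed = hashlib.sha256(attempt.encode()).digest()
        # hmac.compare_digest runs in constant time with respect to the
        # contents, unlike ==, which bails at the first differing byte.
        return hmac.compare_digest(stored_digest, computed)
    ```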

  • That is a huge red flag if ever given as a reason; you never store the password.
    You store a hash, which is the same length regardless of the password.
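
    A quick sketch of that fixed length (SHA-256 as a stand-in for whatever hash the site stores):

    ```python
    import hashlib

    # The stored digest has the same length no matter how long the input is.
    for pw in ("hunter2", "a" * 10_000):
        print(len(hashlib.sha256(pw.encode()).hexdigest()))  # 64 both times
    ```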

  • It wouldn't though; it would be more like 7.5 parts milk to 3 parts flour to almost a part of oil to half a part of sugar (see the quick calculation below).
    And that is still quite imprecise: using 22 g or 26 g of sugar makes a change in taste I wouldn't want to happen uncontrolled at random. I'm also closer to 41 g of oil these days, and wouldn't want to use 50 just to make it fit some very coarse division.
    Scoops of stuff also seem very imprecise. Are they at least levelled?

    I also use "a pinch of salt", which doesn't have to be very precise, but if someone were to ask, I could tell them "roughly 0.2 g", from having just measured it. I still remember how much I hated descriptions like "a pinch" as a cooking novice; now I can simply measure my pinch on a scale, and others can confirm their pinch on their scale until it roughly matches 0.2 g too. How would that work in imperial?
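
    The quick calculation behind those parts, dividing the quantities from the recipe in this thread by a common 50 g unit:

    ```python
    # Parts per 50 g unit for the quantities mentioned in this thread.
    recipe_g = {"milk": 375, "flour": 150, "oil": 41, "sugar": 24}

    for ingredient, grams in recipe_g.items():
        print(f"{ingredient}: {grams / 50:.2f} parts")
    # milk: 7.50, flour: 3.00, oil: 0.82 (almost one), sugar: 0.48 (about half)
    ```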

  • l is lowercase, and kl is not used. A kl is a m³, which water utilities charge by, and which pools and interior volumes are measured in.

  • I have 1 l of milk and 1 kg of flour. My recipe wants ⅜ of a liter of milk and 150 g of flour. 375 ml is a bit odd but ultimately trivial, and very easy to measure when I just pour 375 g into my blender on a scale.
    Now how would imperial cups deal with 150 g from 1 kg?
    I also have 45 g of oil; what odd measurements would that give when you try to divide it up without a single decimal number?
    Try 24 g of sugar.

    I'd love to see all that converted to imperial.
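
    For the curious, a rough sketch of that conversion into US cups, using assumed per-cup weights (these vary with packing and brand, which is rather the point):

    ```python
    # Approximate grams per US cup; flour and sugar especially vary with packing.
    grams_per_cup = {"milk": 237, "flour": 120, "oil": 218, "sugar": 200}
    recipe_g = {"milk": 375, "flour": 150, "oil": 45, "sugar": 24}

    for ingredient, grams in recipe_g.items():
        print(f"{ingredient}: {grams / grams_per_cup[ingredient]:.2f} cups")
    # milk ≈ 1.58, flour ≈ 1.25, oil ≈ 0.21, sugar ≈ 0.12 cups
    ```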

  • This is a great second argument for using weight, not volume, for measurements.
    Measuring mass directly is of course not viable, but measuring weight in a consistent location means all the ratios still end up correct, while the ratios between volume-measured and weight-measured substances change (and flour probably compacts differently).
    That is why one should always use a scale to measure their fluids, and why metric is superior: 375 ml of water or milk is 375 g (convert the recipe ahead of time at a reference location), making this trivially easy.

    If you then wish to correct the total mass of your dish, you can simply compare the weight and volume of some water to work out the mass-to-weight ratio, and correct accordingly.
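
    A minimal sketch of that calibration, assuming a scale reading in grams and water at roughly 1 g/ml (the readings below are made up):

    ```python
    # Calibrate against a known volume of water (~1 g/ml), then correct
    # any scale reading back to true mass.
    def local_scale_factor(water_reading_g: float, water_volume_ml: float) -> float:
        # >1 means the scale reads heavy here, <1 means it reads light.
        return water_reading_g / water_volume_ml

    factor = local_scale_factor(water_reading_g=498.0, water_volume_ml=500.0)
    print(375.0 / factor)  # ≈ 376.5 g of actual mass behind a 375 g reading
    ```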

  • I have that exact setup working. qBittorrent (and -nox) is a lot more involved to set up with I2P, but there is some material on how, and once you get it running, it works quite well at this point.

    I don't use Docker for it, but that should work too. For browsing I use a maintained fork of Proxy SwitchyOmega, which lets you choose a proxy profile based on the URL, making it easy to pipe I2P pages into the i2pd SOCKS port (I use i2pd, not I2P; I don't think it matters much). qBittorrent can be configured in the same way to statically use the local SOCKS port (4447 on i2pd) as a proxy, to prevent any clearnet communication. In addition, it needs the dedicated I2P host 127.0.0.1 and port 7656 (the SAM bridge, giving deeper access to I2P).

    Don't expect to do anything on the clearnet over I2P; the exits are not good, and it's not what I2P is meant for. For that reason, don't set I2P up as something like a system proxy/VPN; instead, pipe the specific programs you want to use I2P into the proxy ports via their proxy settings.

    To get rid of the firewalled status in the I2P daemon, you will need to forward ports. You may have seen advice for servers that are not behind a firewall and NAT, which effectively have all ports "forwarded" already: the mythical dedicated IPv4 address.
    In your case, you need to pick a random port for your I2P daemon's host-to-host communication, then forward both TCP and UDP for it on IPv4. Also make sure you even can forward ports: depending on the region, ISPs no longer hand out a dedicated IPv4 even per router, so you might have to specifically ask your ISP for one (I had to). But that is all generic hosting; if you can set up a Minecraft server, you can give I2P full connectivity.
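
    To sanity-check that the SAM bridge is reachable before pointing qBittorrent at it, here is a minimal sketch of the SAMv3 hello handshake (assuming i2pd's default of 127.0.0.1:7656):

    ```python
    import socket

    # Minimal SAMv3 handshake against a local i2pd instance.
    with socket.create_connection(("127.0.0.1", 7656), timeout=10) as s:
        s.sendall(b"HELLO VERSION MIN=3.0 MAX=3.1\n")
        print(s.recv(1024).decode().strip())
        # Expected reply: HELLO REPLY RESULT=OK VERSION=3.1
    ```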

  • *500 000 quettabytes
    *Sextillion = 10^21 (= zetta)

    I'd recommend Wikipedia here; your source seems to have taken 3 years to update their table, and their image is still outdated.

    They likely didn't use quetta because it was only added 3 years ago, and is still not widely known. Or maybe it sounded better.
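
    For reference, the top of the SI prefix scale (ronna and quetta were added in 2022), and the arithmetic behind the correction:

    ```python
    # Exponents for the largest SI prefixes; ronna and quetta arrived in 2022.
    PREFIX_EXPONENT = {"zetta": 21, "yotta": 24, "ronna": 27, "quetta": 30}

    # Sextillion is 10**21, i.e. zetta. 500 000 quettabytes in bytes:
    print(500_000 * 10**PREFIX_EXPONENT["quetta"])  # 5 * 10**35
    ```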