Posts: 0 · Comments: 70 · Joined: 2 yr. ago

  • Correct me if I’m wrong

    Well, actually, yes, I'm sorry to have to tell you that you are wrong. Shannon-Fano coding is suboptimal for prefix codes, and Huffman coding, while optimal among prefix codes, is not necessarily the most efficient compression method for any given data (and often isn't).

    Huffman can be optimal given certain strict constraints, but those constraints don't always occur in natural/real-world data.

    The best compression method (whether lossless or lossy) depends greatly on the nature of the data to be compressed. Patterns and biases can make certain methods much more efficient (or more practical) in some cases, when they might be useless elsewhere or in general. This is why data is often transformed before compression, using a reversible transformation that "encourages" certain desirable statistical characteristics in the data, so the compression method can better exploit them.

    For example, compression software often transforms the data before its Huffman stage to get a better compression ratio: bzip2 applies a Burrows-Wheeler transform (and other encodings) first, and gzip applies LZ77 matching first. If Huffman coding were an optimal compression method for all possible data, this would be redundant! Often (e.g. in medical imaging or audio/video data) the data is best analysed in a different domain -- such as the frequency domain instead of the time/spatial domain -- to better reveal the underlying patterns and redundancies so they can be easily exploited by compression.
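
    To make that concrete, here's a minimal Huffman sketch in Python (illustrative only; it assumes the input has at least two distinct symbols). The code lengths it produces depend entirely on the symbol frequencies of that particular input, which is exactly why a pre-transform that reshapes those statistics can help.

    ```python
    import heapq
    from collections import Counter

    def huffman_code(data: str) -> dict[str, str]:
        """Build a Huffman code for `data` (assumes >= 2 distinct symbols)."""
        freq = Counter(data)
        # Heap entries: (frequency, unique tie-breaker, {symbol: code-so-far}).
        heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        tie = len(heap)
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)
            f2, _, right = heapq.heappop(heap)
            # Merge the two least-frequent subtrees, prefixing their codes with 0/1.
            merged = {s: "0" + c for s, c in left.items()}
            merged.update({s: "1" + c for s, c in right.items()})
            heapq.heappush(heap, (f1 + f2, tie, merged))
            tie += 1
        return heap[0][2]

    # Skewed frequencies give short codes to common symbols; uniform frequencies give equal-length codes.
    print(huffman_code("aaaabbc"))   # 'a' gets a 1-bit code, 'b' and 'c' get 2-bit codes
    print(huffman_code("abcdefgh"))  # every symbol ends up with a 3-bit code
    ```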

  • Damn, so what's the name of the shape that's a flat donut with inner and outer circular perimeters? I.e. a filled circle with a smaller, concentric circular area subtracted from it. Or the 2D cross-section of a torus, taken through the plane that intersects the widest part of the torus. A squished donut, or chubby circle, if you like.

  • And many "circles" aren't circles either, but 2D torus approximations. The edge of a true circle is made of infinitesimally small points, so it would be invisible when drawn. And even if you consider a filled circle, how could you be sure you aren't looking at a 1-torus with an infinitesimally small hole? Or an approximation of the set of all points within a circle?

    Clearly, circles are a scam.

  • No, I'm not. Saying Solution B is economically more feasible than Solution C is not an argument in favour of Solution A, even if A is cheaper than B or C, because cost is not the only factor.

    Had you actually read my comment, you'd see I'm pro-nuclear, and even more pro-renewables.

    Why don't you check your own biases and preconceptions for a second and read what I actually wrote instead of what you think I wrote. I could just as easily call you an anti-renewable shill for nuclear pollution, using precisely the same argument you used. It's not valid.

    Hint: if you ever find yourself arguing with "people like you..." -- you've lost the argument. Try dropping the right-wing knee-jerk rhetoric and start thinking.

  • Unfortunately, it's not as simple as that. Theoretically, if everyone was using state-of-the-art designs of fast-breeder reactors, we could have up to 300,000 years of fuel. However, those designs are complicated and extremely expensive to build and operate. The finances just don't make it viable with current technology; they would have to run at a huge financial loss.

    As for extracting uranium from sea-water -- this too is possible, but it has rapidly diminishing returns that soon make it financially unviable. As uranium is extracted and removed from the oceans, exponentially more sea-water must be processed to continue extracting it at the same rate, which becomes infeasible pretty quickly. Estimates are that it would become economically unviable within about 30 years.

    Realistically, with current technology we have about 80-100 years of viable nuclear fuel at current consumption rates. If everyone was using nuclear right now, we would fully deplete all viable uranium reserves in about 5 years. A huge amount of research and development will be required to extend this further, and to make new more efficient reactor designs economically viable. (Or ditch capitalism and do it anyway -- good luck with that!)

    Personally, I would rather this investment (or at least a large chunk of it) be spent on renewables, energy storage, and distribution before fusion, with fission nuclear as a stop-gap until other cleaner, safer technologies can take over. (Current energy usage would require running about 15,000 reactors globally, and with historical accident rates that's about one major nuclear disaster every month -- see the quick arithmetic at the end of this comment.) Renewables are simpler, safer, and proven, and the technology is more-or-less already here. Solving the storage and distribution problem is simpler than building safe and economical fast-breeder reactors, or viable fusion power. We have almost all the technology we need to make this work right now; we mostly just lack the infrastructure and the will to do it.

    I'm not anti-nuclear, nor am I saying there's no place for nuclear, and I think there should be more funding for nuclear research, but the boring, obvious solution is to invest heavily in renewables, with nuclear as a backup and/or future option. Maybe one day nuclear will progress to the point where it makes sound sense to go all-in on, say, fusion or super-efficient fast-breeders, but at the moment that's basically science fiction. I don't think it's a sound strategy to bank on nuclear right now, although we should definitely continue to develop it. Maybe if we had kept investing in it at the same rate for the last 50 years it would be more viable -- but we didn't.

    Source for estimates: "Is Nuclear Power Globally Scalable?", Prof. D. Abbott, Proceedings of the IEEE. It's an older article, but nuclear technology has been pretty much stagnant since it was published.
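
    The promised back-of-envelope unpacking of the "one major disaster every month" figure (purely illustrative; the accident rate below is simply the rate that figure implies at that scale, not an independent statistic):

    ```python
    # Purely illustrative: unpack the arithmetic behind "one major disaster a month".
    reactors_needed = 15_000        # reactors needed to cover current global energy use (figure above)
    accidents_per_month = 1         # the claimed rate at that scale

    reactor_years_per_accident = reactors_needed / (accidents_per_month * 12)
    print(f"Implied rate: one major accident per ~{reactor_years_per_accident:,.0f} reactor-years")
    # ~1,250 reactor-years per accident
    ```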

  • The German equivalent of the traditional Mr Punch / Pulcinella character is called Kasper. There are several variations in spelling across different countries/languages, so I guess Kaspar must be one of them (or OP misremembered the spelling).

  • I was once asked by our CTO to remove all instances of the word "nonce" from our crypto code. (British slang for paedophile/pervert)

    And writing code to find, kill, and reap orphaned children is routine stuff. I mean, you wouldn't want to risk pesky zombie orphans running amok in your system!
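
    For the curious, a minimal sketch of that routine "reap orphaned children" housekeeping in Python (assuming a POSIX system; the function name is my own):

    ```python
    import os
    import signal

    def reap_zombie_children() -> None:
        """Collect exit statuses of finished children so they don't linger as zombies."""
        while True:
            try:
                pid, status = os.waitpid(-1, os.WNOHANG)   # -1: any child; WNOHANG: don't block
            except ChildProcessError:
                return                                     # no children at all
            if pid == 0:
                return                                     # children exist, but none have exited yet
            print(f"reaped child {pid} (exit code {os.waitstatus_to_exitcode(status)})")

    # A common pattern: reap whenever the kernel signals that a child has terminated.
    signal.signal(signal.SIGCHLD, lambda signum, frame: reap_zombie_children())
    ```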

  • In that case you'd be better off installing and learning Debian. It's what Linux Mint and Ubuntu are based on, as well as many other distros such as Knoppix, Raspberry Pi OS, and Kali. What you learn about Debian will be transferable to many other systems.

  • The modern definition we use today was cemented in 1998, along with the foundation of the Open Source Initiative. The term was used before this, but did not have a single, well-defined meaning. What we might call Open Source today was mostly known as "free software" prior to 1998, amongst many other terms (sourceware, freely distributable software, etc.).

    Listen again to your 1985 example. You're not hearing exactly what you think you're hearing. Note that in your video example the phrase used is not "Open-Source code" as we would use today, with all its modern connotations (that's your modern ears attributing modern meaning back into the past), but simply "open source-code" - as in "source code that is open".

    In 1985 that didn't necessarily imply anything specific about copyright, licensing, or philosophy. Today it carries with it a more concrete definition and cultural baggage, which it is not necessarily appropriate to apply to past statements.

  • In this thread: people who don't understand what power is.

    Power isn't something that is "pushed" into a device by a charger. Power is the rate at which a device uses energy. Power is "consumed" by the device, and the wattage rating on the charger is simply the maximum it can supply, which is determined by how much current it can handle at its output voltage. A device only draws the power it needs to operate, and this may go up or down depending on what it's doing, e.g. whether your screen is on or off.

    As long as the voltage is correct, you could hook your phone up to a 1000W power supply and it will be absolutely fine. This is why everything's OK when you plug devices into your gaming PC with a 1000W power supply, or why you can swap out a power-hungry video card for a low-power one, and the power supply won't fry your PC. All that extra power capability simply goes unused if it isn't called for.

    The "pushing force" that is scaled up or down is voltage. USB chargers advertise their capabilities, or a power-delivery protocol is used to negotiate voltages, so the device can choose to draw more current, and thus more power, from the charger as it sees fit. (If the device tries to draw too much, a poorly-designed charger may fail, and in turn this could expose the device to inappropriate voltages and currents, damaging both devices. Well-designed chargers have protections to prevent this, even in the event of failure. Cheap crappy chargers often don't.)
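
    A toy calculation of that point (all numbers invented for illustration): power is just the negotiated voltage times whatever current the device chooses to draw, so an oversized charger rating costs the device nothing.

    ```python
    # Illustrative only: the device decides how much current it draws;
    # the charger's wattage rating is a ceiling, not something pushed into the phone.
    def power_drawn_w(volts: float, amps_drawn: float) -> float:
        return volts * amps_drawn                # P = V * I

    charger_capacity_w = 1000.0     # hypothetical oversized supply
    negotiated_volts = 5.0          # basic USB voltage
    device_current_a = 1.2          # whatever the phone actually needs right now

    used = power_drawn_w(negotiated_volts, device_current_a)
    print(f"Device draws {used:.0f} W; the other {charger_capacity_w - used:.0f} W of capacity simply goes unused")
    ```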

  • In the latest version of the emergency broadcast specification (WEA 3.0), a smart phone's GPS capabilities (and other location features) may be used to provide "enhanced geotargeting" so precise boundaries can be set for local alerts. However, the system is backwards compatible -- if you do not have GPS, you will still receive an alert, but whether it is displayed depends on the accuracy of the location features that are enabled. If the phone determines it is within the target boundary, the alert will be displayed. If the phone determines it is not within the boundary, it will be stored and may be displayed later if you enter the boundary.

    If the phone is unable to geolocate itself, the emergency message will be displayed regardless. (Better to display the alert unnecessarily than to not display it at all).

    The relevant technical standard is WEA. Only the latest WEA 3.0 standard uses phone-based geolocation. Older versions just broadcast from cell towers within the region, and all phones that are connected to the towers will receive and display the alerts. You can read about it in more detail here.
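
    A rough sketch of that device-side behaviour in Python (my own paraphrase of the description above, not the actual WEA 3.0 implementation; all names are made up):

    ```python
    from enum import Enum, auto

    class Fix(Enum):
        INSIDE_TARGET_AREA = auto()
        OUTSIDE_TARGET_AREA = auto()
        UNKNOWN = auto()             # the phone cannot geolocate itself

    def handle_alert(fix: Fix, display, store_for_later) -> None:
        """Display, defer, or fall back, per the behaviour described above."""
        if fix is Fix.INSIDE_TARGET_AREA:
            display()                # inside the boundary: show immediately
        elif fix is Fix.OUTSIDE_TARGET_AREA:
            store_for_later()        # may be shown later if the phone enters the area
        else:
            display()                # no fix: better to over-alert than to miss it

    # Example: a phone with no location fix still shows the alert.
    handle_alert(Fix.UNKNOWN, display=lambda: print("ALERT shown"), store_for_later=lambda: None)
    ```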

  • I understand the concerns about Google owning the OS; that's my only worry with my Chromebook. If Google start preventing the use of adblockers, or limiting freedoms in other ways, that might sour my opinion. But the hardware can run other OSs natively, so that would be my get-out-of-jail option if needed.

    I've not encountered problems with broken support for dev tools, but I am using a completely different toolchain from yours. My experience with Linux dev and cross-compiling for Android has been pretty seamless so far. My Chromebook also seems to support GPU acceleration through both the Android and Linux VMs, so perhaps that is a device-specific issue?

    I'm certainly not going to claim that Chromebooks are perfect devices for everyone, nor a replacement for a fully-fledged laptop or desktop OS experience. For my particular usage it's worked out great, but YMMV; my main point is that ChromeOS isn't just for idiots, as the poster above seemed to think.

    Also, a good percentage of my satisfaction with it is the hardware and form-factor rather than ChromeOS per se. The same device running Linux natively would still tick most of my boxes, although I'd probably miss a couple of Android apps and tablet-mode support.

  • "People who use Chromebooks are also really slow and aren’t technically savvy at all."

    Nonsense. I think your opinion is clouded by your limited experience with them.

    ChromeOS supports a full Debian Linux virtual machine/container environment. That's not a feature aimed at non-tech-savvy users. It's used by software developers (especially web and Android devs), Linux sysadmins, and students of all levels.

    In fact I might even argue the opposite: a more technically-savvy user is more likely to find a use case for them.

    Personally, I'm currently using mine for R&D in memory management and cross-platform compiler technology, with a bit of hobby game development on the side. I've even installed and helped debug Lemmy on my Chromebook! It's a fab ultra-portable, bulletproof dev machine with a battery life that no full laptop can match.

    But then I do apparently have an IQ of zero, so maybe you're right after all...