
  • That's actually the inverse of what ranked choice does.

    Ranked choice fulfills "later-no-harm"; filling out a third choice can never hurt your second or first choice.

    Because of that, it fails "favorite betrayal"; there are times when you get a worse outcome by voting for your honest favorite.

    That's mostly because ranked choice doesn't consider your second pick until your first has been eliminated, and so on down the ballot. So there are a bunch of weird edge cases where a compromise candidate, with enough second- and third-place votes to win in the final round, gets eliminated early on before any of that lower-preference support is ever counted.

    Suppose there's an election like that where the Liberal is the compromise candidate that could beat either the NDP or Conservative candidate in the final round, but because the NDP and Conservative get more first-place votes, the election goes Conservative. Depending on the particulars, NDP voters could potentially have elected the Liberal by staying home, or even by voting Conservative. Either way, they'd have been better off strategically voting for the Liberal than voting honestly for the NDP.

    In general, voting honestly in ranked choice is only safe if either you're voting for a fringe third party that could never win, or you're voting for one of the two candidates with the most total popularity.
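    A quick way to see both properties is to simulate a small instant-runoff election. This is a toy sketch with made-up vote counts, not data from any real election:

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff: repeatedly eliminate the candidate with the fewest
    first-place votes until someone holds a majority of remaining ballots."""
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader
        loser = min(tally, key=tally.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# Hypothetical riding where the Liberal is the compromise candidate:
# they'd beat either rival head-to-head, but have the fewest first-place votes.
honest = ([["Con", "Lib", "NDP"]] * 42
          + [["NDP", "Lib", "Con"]] * 33
          + [["Lib", "Con", "NDP"]] * 25)
print(irv_winner(honest))  # Con: the Liberal is eliminated in round one

# Nine NDP voters "betray" their favorite and rank the Liberal first.
strategic = ([["Con", "Lib", "NDP"]] * 42
             + [["NDP", "Lib", "Con"]] * 24
             + [["Lib", "NDP", "Con"]] * 9
             + [["Lib", "Con", "NDP"]] * 25)
print(irv_winner(strategic))  # Lib
```

    Nine NDP supporters switching flips the winner from Conservative to Liberal, whom they prefer, so voting honestly made their outcome worse.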

  • Plurality voting only really works well in two candidate elections. In three candidate elections, you start to frequently run into problems with spoilers.

    'Social utility efficiency' is a mathematical measure of how happy people are with the results of simulated elections. Plurality scores far worse than any other reasonable method.

    Because of that, the best performing third parties in countries that use plurality are regional ones. You'll have local elections where one of the national major parties is functionally a third party.

    Trying to oppose the two party system by just voting third party is about as effective as trying to end car dependency by just biking down major stroads. Without changing the underlying environment (e.g. switching to a better voting system or building a protected bike lane), most people won't follow you.
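    The spoiler effect itself is easy to demonstrate with a toy plurality count (candidate names and totals here are invented):

```python
from collections import Counter

def plurality_winner(votes):
    # Plurality: whoever gets the most first (and only) choices wins.
    return Counter(votes).most_common(1)[0][0]

# 55 voters prefer candidate A's politics, 45 prefer C.
two_way = ["A"] * 55 + ["C"] * 45
# A similar candidate B enters and splits A's 55 supporters.
three_way = ["A"] * 30 + ["B"] * 25 + ["C"] * 45

print(plurality_winner(two_way))    # A
print(plurality_winner(three_way))  # C, though a majority preferred A or B
```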

  • It's also a tiny part of the market, because it's literally 25x the cost of turning methane into hydrogen.

    Current electrolyzers are also much less energy efficient than batteries. It's not an unsolvable problem, but battery tech is currently much more affordable than green hydrogen infrastructure. And an electrified third rail is a much better idea for trains.

  • But I guess non-action and bootlicking while we wait for our thoroughly bribed politicians to do nothing is better.

    Nation-wide action, of course, is best. Something like the Green New Deal, or even a market-based solution like cap-and-trade or a carbon tax.

    On a local level, though, there's a lot of action that can be done.

    Nation-wide, the biggest category of carbon emissions is transportation, at 28% of all emissions. Over half of all transportation-related emissions are from cars and trucks.

    The amount people drive is closely tied to local urban design, which comes down largely to local zoning regulations and infrastructure design. Those are primarily impacted by the people who show up at town meetings and vote.

    Advocate for walkable, mixed-use zoning, improved bike infrastructure, etc. Most people aren't "drivers", "cyclists" or "public transit riders", they're people who want to get from point A to point B as easily as possible and will take whatever is best.

  • A large number of gas stations are franchises. Breaking the LCD screens hurts the local franchise owner, not whichever fossil fuel company they're working with.

    More to the point, breaking LCD screens accomplishes absolutely nothing. Most people don't drive because they love driving, they drive because of zoning, sprawl and a lack of reasonable alternatives. If you get rid of fossil fuel infrastructure without fixing the underlying car dependency, they'll be stuck at home.

  • Efficiency in economics has a particular technical definition.

    Pareto efficiency, or Pareto optimality, is a situation where no action or allocation is available that makes one individual better off without making another worse off.

    Free markets are great at producing outcomes that are efficient in a particular technical sense, but not especially equitable.
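    A minimal sketch of that definition in Python, with made-up utilities for two people:

```python
def pareto_dominates(a, b):
    """True if allocation a makes no one worse off than b, and someone better off."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Hypothetical utilities for two people under three ways of splitting a fixed pie.
allocations = {
    "all_to_person_1": (10, 0),
    "even_split":      (5, 5),
    "wasteful":        (4, 4),
}

def is_pareto_efficient(name):
    mine = allocations[name]
    return not any(pareto_dominates(other, mine)
                   for other in allocations.values() if other != mine)

print(is_pareto_efficient("all_to_person_1"))  # True: efficient, not equitable
print(is_pareto_efficient("wasteful"))         # False: even_split dominates it
```

    Giving one person everything is Pareto efficient (any change hurts them), which is exactly why efficiency says nothing about equity.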

    If you're on propane, a heat pump is likely to be cheaper to run. Particularly over the course of an entire heating season, because heat pumps are more efficient in fall and spring than in the coldest part of winter.

    But yeah, this study wasn't looking at cost per therm, just raw COP, which is a pointless metric. It doesn't even compare the amount of heat you get from burning natural gas in a furnace vs. burning it in a modern power plant that supplies a heat pump. Although since we don't have a carbon tax, that's only a theoretically interesting comparison.

    Heat pumps work fine for most people in the north. Mitsubishi's cold climate heat pumps supply 85% of their rated heat at -13F. Buffalo is a city known for its winters, and the last time Buffalo's lowest temperature was below that was 1982. They're just going to be a more expensive option for most people right now.
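    For the cost side, here's a back-of-the-envelope comparison of cost per therm of delivered heat. Every price and efficiency below is an assumption for illustration; plug in your own utility rates:

```python
# 1 therm of heat = 29.3 kWh. All prices below are assumed, not quoted rates.
KWH_PER_THERM = 29.3

gas_price_per_therm = 1.50      # $/therm of natural gas, assumed
propane_price_per_therm = 3.00  # $/therm-equivalent of propane, assumed
furnace_efficiency = 0.95       # high-efficiency condensing furnace

elec_price_per_kwh = 0.15       # $/kWh, assumed
seasonal_cop = 3.0              # seasonal average COP; lower in the coldest weeks

furnace_cost = gas_price_per_therm / furnace_efficiency
propane_cost = propane_price_per_therm / furnace_efficiency
heat_pump_cost = elec_price_per_kwh * KWH_PER_THERM / seasonal_cop

print(f"natural gas furnace: ${furnace_cost:.2f}/therm delivered")
print(f"propane furnace:     ${propane_cost:.2f}/therm delivered")
print(f"heat pump:           ${heat_pump_cost:.2f}/therm delivered")
```

    With these assumed numbers, the heat pump handily beats propane and is roughly competitive with natural gas, which matches the seasonal-cost point above.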

  • What kind of runtime tag corresponds to generics, exactly?

    Python handles generics essentially the same way that Java 1.0 did: it just kinda punts on them. In Java 1.0, a List is a heterogeneous collection of Objects. Object, in Java 1.0, lets you write polymorphic code. But it's not really the same sort of thing; that's why they added generics.

    It's compile-time that has limitations on what it can do; runtime has none.

    Ish.

    There's typing a la Curry, where semantics don't depend on static types and typing a la Church, where semantics can depend on static types.

    Haskell's typeclasses, Scala's implicits and Rust's traits are great examples of something that inherently requires Church-style typing.

    One of the nice things typeclasses let you do is write functions that are polymorphic in their return type, and the language will automagically pick the right implementation based on type inference. For example, in Haskell, the result of the expression fromInteger 1 depends on the type ascribed to it. Use it somewhere that expects a double? It'll produce a double. Use it somewhere that expects a complex number? You'll get a complex number. Use it somewhere you're using an automatic differentiation library? You'll get whatever type that AD library defined.

    That's fundamentally not something you can do in Python. You have to go with the manual implementation-passing approach, which is painful enough that people do it very sparingly.

    More to the point, though, limitations have both costs and benefits. There's a reason Python doesn't have goto and that its strings are immutable, even though those are limitations. The question is always whether the costs outweigh the benefits.
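    For comparison, here's roughly what that manual implementation-passing looks like in Python: a hand-rolled stand-in for Haskell's Num dictionary, with the "instance" threaded through by the caller. The names are invented for illustration:

```python
from fractions import Fraction

# Each dictionary plays the role of a typeclass instance: it says how to
# build "1" for a particular numeric type. Haskell passes these implicitly;
# in Python the caller has to pass them by hand.
float_num = {"from_integer": float}
complex_num = {"from_integer": complex}
fraction_num = {"from_integer": Fraction}

def one(num):
    # In Haskell, `fromInteger 1` picks the instance from the inferred
    # return type; here the caller must thread the dictionary through.
    return num["from_integer"](1)

print(one(float_num))     # 1.0
print(one(complex_num))   # (1+0j)
print(one(fraction_num))  # 1
```

    Every call site has to name the dictionary explicitly, which is exactly the boilerplate that typeclass resolution eliminates.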

  • Bob Harper uses 'unityped' in his post about how dynamic typing is a static type system with a single type in disguise. I've literally never heard "monotyped" used as a term in a dynamic context.

    In Types and Programming Languages, Ben Pierce says "Terms like 'dynamically typed' are arguably misnomers and should probably be replaced by 'dynamically checked', but the usage is standard". Generally, you'll see 'tag' used by type theorists to distinguish what dynamic languages are doing from what a static language considers a type.

    Type systems have existed as a field in math for over a century and predate programming languages by decades. They do a slightly different sort of thing vs dynamic checking, and many type system features like generics or algebraic data types make sense in a static context but not in a dynamic one.

  • Basic stuff like maps aren’t easy to implement either.

    This is mostly due to a preference for immutable data structures. That said, the standard library has a balanced tree-based map that's not too complex.

    If you want a good immutable data structure with effectively O(1) find and update, you're unfortunately looking at something like a hash array mapped trie, but that's the same in e.g. Clojure or Scala.
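    To show the core trick, here's a toy persistent map in Python: inserting copies only the path from the root down and shares the rest of the old tree. It's left unbalanced for brevity, unlike the balanced tree in Haskell's standard library:

```python
from typing import NamedTuple, Optional

class Node(NamedTuple):
    key: int
    value: str
    left: Optional["Node"]
    right: Optional["Node"]

def insert(node, key, value):
    # Returns a NEW tree; the old one is untouched (path copying).
    if node is None:
        return Node(key, value, None, None)
    if key < node.key:
        return node._replace(left=insert(node.left, key, value))
    if key > node.key:
        return node._replace(right=insert(node.right, key, value))
    return node._replace(value=value)

def lookup(node, key):
    while node is not None:
        if key == node.key:
            return node.value
        node = node.left if key < node.key else node.right
    return None

v1 = insert(insert(insert(None, 2, "b"), 1, "a"), 3, "c")
v2 = insert(v1, 3, "C")              # new version; v1 is still valid
print(lookup(v1, 3), lookup(v2, 3))  # c C
print(v2.left is v1.left)            # True: untouched subtree is shared
```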

  • Haskell started out as an academic language.

    The problem Haskell was trying to solve was that in the late 80s, there was a bunch of interest in lazy functional programming but all the research groups had to write their own lazy language before writing a paper on whatever new feature they were interested in. So they banded together to create Haskell as a common research program that they could collectively use.

    It's been remarkably successful for a research language, and has become more practical over the years in many ways. But it's always had the motto "avoid (success at all costs)".

  • There's a bunch of companies that use Haskell.

    Off the top of my head, Mercury is an online bank built on Haskell, Tsuru Capital is a hedge fund built on it, Standard Chartered has a big Haskell team in Singapore, and Facebook's automated rule-based spam detection is built in Haskell.

    There's also Cardano, in the crypto space.

    And various other companies might have a project or two written in Haskell by a small team.

  • Isn't the whole point of dynamic languages that they're monotyped? They're equivalent to a type system with only one type, any. Really, most dynamic languages are equivalent to having a single tagged union of all the different sorts of values in the language.

    If you add additional types, you get into gradual type systems.
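    A toy sketch of that reading in Python: every value carries a runtime tag (its type), and code branches on the tag, exactly as it would on the constructor of one big tagged union:

```python
# Every Python value belongs to the single "any" type; type(x) is the tag.
def stringify(value):
    tag = type(value)
    if tag is int:
        return f"int {value}"
    if tag is str:
        return f"str {value!r}"
    if tag is list:
        return "[" + ", ".join(stringify(v) for v in value) + "]"
    return f"<{tag.__name__}>"

print(stringify([1, "two", 3.0]))  # [int 1, str 'two', <float>]
```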

  • From his blog post:

    While you may compile dialects into it, you still have to accept the fact that running code in the browser means running JavaScript. So being able to write that, free of any tooling, and free of any strong typing, is a blessing under the circumstances.

    By his logic, JS linters are bad because they're tooling that restricts your access to all of JavaScript. But linters mean you don't have to read PRs with a fine-tooth comb to make sure there are no footguns like using == instead of ===.

    Also, you could use that same logic to advocate for writing JVM bytecode directly instead of Java/Kotlin/Scala/Clojure/etc.

    The question is really whether tooling pays its way in terms of lower bug rates, code that's easier for coworkers to read, and code that's easier to reason about.

  • Look at last year's mass shooting in Buffalo, where a racist drove halfway across the state to shoot up a grocery store in a black neighborhood. He shot 12 people, including a "good guy with a gun" that the NRA claims stops attacks like that.

    He had bought his rifle legally in NYS, but went across the border to PA to buy 30 round magazines, which are illegal in NY.

    Having access to 20 more rounds per magazine than NY's max certainly didn't help things, but that terrorist attack would probably still have happened if NY's laws were nationwide.

    The problem is both that location-specific gun control is ineffective because you can just go a state/city over, and that passing effective gun control even in a state like NY is almost impossible.

  • Of the 25 dogs identified as pit bull-type dogs by breed signature, 12 were identified by shelter staff as pit bull-type dogs at the time of admission to the shelter (prior to the study visit), including five labeled American Staffordshire terrier mix, four pit bull mix, two pit bull, and one American Staffordshire terrier. During the study, 20/25 dogs were identified by at least one of the four staff assessors as pit bull-type dogs, and five were not identified as pit bull-type dogs by any of the assessors. ...

    Of the 95 dogs (79%) that lacked breed signatures for pit bull heritage breeds, six (6%) were identified by shelter staff as pit bull-type dogs at the time of shelter admission, and 36 (38%) were identified as pit bull-type dogs by at least one shelter staff assessor at the time of the study visit

    So, at intake, 18 dogs were identified as pit bulls but only 2/3rds were at least 12% pit bull.

    During the study, 56 dogs were identified as being pit bulls, but only about 1/3rd were in fact at least 12% pit bull.

    This is the classic 'base rate fallacy'. The false positive rate isn't that high, and neither is the false negative rate. But because the base rate (the share of dogs that actually have pit bull heritage) is pretty low, the ratio of true positives to false positives is much worse than you'd intuitively think.

    Tests for rare diseases and attempts to behaviorally profile terrorists at airports run into the same problem. Sometimes, a 99.9% accurate test just moves you from searching for a needle on an entire farm to searching for a needle in a single haystack.
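    The arithmetic from the study's numbers, plus a generic rare-condition version, in a few lines of Python (the 1-in-100,000 prevalence is an invented illustration, not a figure from the study):

```python
def precision(true_pos, false_pos):
    # Of everything flagged positive, what fraction actually is positive?
    return true_pos / (true_pos + false_pos)

# At intake: 12 of the 18 "pit bull" labels had pit bull heritage by DNA.
print(precision(12, 6))    # ~0.67
# During the study: 20 of the 56 labels did.
print(precision(20, 36))   # ~0.36

# A 99.9%-accurate test for a 1-in-100,000 condition (illustrative numbers):
population = 1_000_000
prevalence = 1 / 100_000
sensitivity = specificity = 0.999
true_pos = population * prevalence * sensitivity
false_pos = population * (1 - prevalence) * (1 - specificity)
print(precision(true_pos, false_pos))  # ~0.01: almost all positives are false
```

    Even with near-perfect accuracy, the rare condition means false positives from the huge healthy population swamp the handful of true positives.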