
Posts 8 · Comments 1,561 · Joined 2 yr. ago

  • “Currently, there is no consensus on the face of the Democrat Party, as a majority of voters either give the title to AOC (26%) or simply say there is none (26%),” Co/efficient concluded.

    Never heard of Co/efficient, but “Democrat Party” is a bit of a red flag. From mediabiasfactcheck:

    FiveThirtyEight, an expert on measuring and rating pollster performance, has evaluated 20 polls by co/efficient, earning 0.7 stars for accuracy, indicating they are Mixed Factual by MBFC’s criteria. They also conclude that their polling moderately favors the Right with a score of -2.7, which equates to a Right-Center polling bias. In general, co/efficient is considered moderately accurate and demonstrates a right-leaning bias in polling.

  • I think the general rule (which also applies on one-way streets, etc.) is that pedestrians in the lane closest to traffic should walk facing oncoming traffic, so cars aren’t approaching from the pedestrian’s blind spot.

  • The basic idea behind the researchers’ data compression algorithm is that if an LLM knows what a user will be writing, it does not need to transmit any data; it can simply generate, on the other end, what the user wants to transmit.

    Great... but if that’s the case, maybe the user should reconsider the usefulness of transmitting that data in the first place.
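    A minimal sketch of that shared-model idea (my own illustration, not the researchers’ actual algorithm), using a toy bigram model as a stand-in for the LLM: both ends hold the same deterministic model, and the sender transmits only each token’s rank among the model’s predictions, so a perfectly predicted token costs almost nothing.

    ```python
    # Toy sketch of shared-model compression. Sender and receiver hold
    # identical deterministic models; the sender sends only each token's
    # rank among the model's predictions, so predicted tokens cost rank 0.
    from collections import Counter, defaultdict

    def train_bigram(corpus):
        """Stand-in for an LLM: bigram counts over a shared corpus."""
        counts = defaultdict(Counter)
        for a, b in zip(corpus, corpus[1:]):
            counts[a][b] += 1
        return counts

    def predictions(model, prev, vocab):
        """Candidate next tokens, most probable first, ties alphabetical."""
        return sorted(vocab, key=lambda t: (-model[prev][t], t))

    def encode(model, vocab, tokens):
        """Replace each token after the first with its predicted rank."""
        ranks = [predictions(model, a, vocab).index(b)
                 for a, b in zip(tokens, tokens[1:])]
        return tokens[0], ranks

    def decode(model, vocab, first, ranks):
        """Receiver re-runs the same model to regenerate the text."""
        out = [first]
        for r in ranks:
            out.append(predictions(model, out[-1], vocab)[r])
        return out

    corpus = "the cat sat on the mat and the cat ran".split()
    vocab = sorted(set(corpus))
    model = train_bigram(corpus)
    first, ranks = encode(model, vocab, corpus)
    assert decode(model, vocab, first, ranks) == corpus
    print(ranks)  # mostly zeros: predictable text needs almost no data
    ```

    On this toy corpus the ranks come out mostly zero, which is the “no data to transmit” limit the article is describing; real schemes would still push the residual ranks through an entropy coder rather than dropping them.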

  • You only know the total mass, charge, and angular momentum of the black hole—you don’t know how those properties are distributed inside the event horizon. You see the apple approach the horizon and the horizon expands to encompass the apple-black hole system, but that information isn’t coming from the singularity at the center—it’s coming from the horizon.
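    For reference (my own addition, the standard no-hair statement rather than anything from the article): the exterior geometry is the Kerr–Newman solution, fixed entirely by those three totals. In geometrized units the outer horizon sits at

    $$ r_+ = M + \sqrt{M^2 - a^2 - Q^2}, \qquad a = J/M, $$

    so the horizon’s size depends only on the totals M, Q, and J, not on how they are distributed inside.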

  • AlphaEvolve verifies, runs and scores the proposed programs using automated evaluation metrics. These metrics provide an objective, quantifiable assessment of each solution’s accuracy and quality.

    Yeah, that’s the way genetic algorithms have worked for decades. Have they figured out a way to turn those evaluation metrics directly into code improvements, or do they just keep doing a bunch of rounds of trial and error?
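    For contrast, the decades-old loop being described looks roughly like this (a generic genetic-algorithm sketch, not AlphaEvolve’s actual internals): the automated metric scores candidates and drives selection, but nothing converts a score directly into a code edit; improvement still arrives through mutation and repeated trials.

    ```python
    import random

    TARGET = "hello world"
    CHARSET = "abcdefghijklmnopqrstuvwxyz "

    def score(candidate):
        """Automated evaluation metric: characters matching the target."""
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate):
        """Random single-character edit: the 'trial' in trial and error."""
        i = random.randrange(len(candidate))
        return candidate[:i] + random.choice(CHARSET) + candidate[i + 1:]

    population = ["".join(random.choice(CHARSET) for _ in TARGET)
                  for _ in range(50)]
    for generation in range(1000):
        population.sort(key=score, reverse=True)
        if score(population[0]) == len(TARGET):
            break
        # Keep the 10 best; refill with mutated copies of the survivors.
        population = population[:10] + [mutate(random.choice(population[:10]))
                                        for _ in range(40)]
    print(generation, population[0])
    ```

    The metric only ever says “better” or “worse”; it never points at which character to change, which is exactly the trial-and-error question above.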