MS · Posts: 0 · Comments: 1,157 · Joined: 2 yr. ago

  • The community notes appear many hours after the original has been posted, meaning that the majority of interactions happen before any note is attached. And even the process of choosing the community notes is not transparent, so you can never be sure the note does its job even after it's added.

  • llvm and clang are packages I give 0 fucks about, but they take up a significant part of my updates. I never really got around to it, but I will try to switch them to binary downloads instead of building that shit. Like, I understand gcc, but I have 0 interest in llvm, and you can't have Firefox without it... smh

  • The current drive behind AI is not progress; it's locking knowledge behind a paywall.

    As soon as one company perfects their AI, it will draw everyone in, marketed as a 'time saver' so you don't have to do anything yourself (including browsing the web, which is in decline even now). Just ask and you shall receive everything.

    Once everyone gets hooked and there is no competition left, they will own the population. News, purchase recommendations, learning, everything we do to exercise our cognitive abilities will be sold through a single vendor.

    Suddenly you own the minds of many people who can't think for themselves or search for knowledge on their own... and that's already happening.

    And it's not the progress I was hoping to see in my lifetime.

  • Really depends on many factors. If you have everything in RAM, almost nothing matters.

    If your dataset outgrows that capacity, various things start to matter depending on your workload: random reads need good indices (as do writes on columns with unique constraints), OLAP queries benefit from a higher work_mem, anything past ~100M rows needs good partitioning (a rough sketch follows below this list), and OLTP may even need custom solutions if you have to keep a long history, but not for every transaction.

    But even with billions of rows, Postgres can handle it with relative ease if you know what you're doing, usually even on hardware you would consider absolutely inadequate (last year I migrated our company DB from MySQL to Postgres, and even with more data and more complex workflows we downsized our RAM by more than half).
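
To make the partitioning and indexing point above concrete, here is a minimal Python sketch using psycopg2. The table name (events), the created_at column, the monthly partition bounds, and the connection string are all hypothetical, not taken from the comment; it only illustrates the kind of DDL that keeps random reads and very large tables manageable.

```python
# A rough sketch, not the setup from the comment above: psycopg2, the DSN, and the
# table/column names (events, created_at) are assumptions made for illustration.
import psycopg2

# Range partitioning by month plus an index on the filtered column -- the kind of
# setup meant by "good partitioning" and "good indices" past ~100M rows.
DDL = """
CREATE TABLE IF NOT EXISTS events (
    id          bigserial,
    created_at  timestamptz NOT NULL,
    payload     jsonb,
    PRIMARY KEY (id, created_at)      -- the partition key must be part of the PK
) PARTITION BY RANGE (created_at);

-- One partition per month; old months can later be detached and archived cheaply.
CREATE TABLE IF NOT EXISTS events_2024_01
    PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

-- An index on the parent cascades to every partition (PostgreSQL 11+).
CREATE INDEX IF NOT EXISTS events_created_at_idx ON events (created_at);
"""

def main() -> None:
    conn = psycopg2.connect("dbname=app user=app host=localhost")  # placeholder DSN
    try:
        # The connection context manager commits on success and rolls back on error.
        with conn, conn.cursor() as cur:
            cur.execute(DDL)
            # Per-session bump for a heavy OLAP-style query, as mentioned above.
            cur.execute("SET work_mem = '256MB';")
    finally:
        conn.close()

if __name__ == "__main__":
    main()
```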