magic_lobster_party @kbin.run · Posts: 1 · Comments: 624 · Joined: 1 yr. ago

  • Recommendation systems are well studied. I don’t think it’s unreasonable to add some form of recommendation layer separate from (or on top of) the content delivery; a rough sketch is below. It doesn’t need to be on par with YouTube’s, at least not before there’s any major content.

    Most YouTubers rely on sponsors or Patreon. Podcasters are doing the same, and many of them are self-hosting. So I don’t think an ad delivery system is all that necessary.

    I don’t see how it would have to work much differently from how Pocket Casts or Overcast already work.

    The first problem is getting content onto the platform.
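
    For illustration, a bare-bones recommendation layer could be as simple as item-item collaborative filtering over a watch-history matrix. This is only a sketch with made-up data and names, not a claim about how any existing platform does it:

```python
import numpy as np

# Toy user x video watch matrix (1 = watched). Purely illustrative data.
watches = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 1, 0],
], dtype=float)

def recommend(user_idx, watches, top_n=2):
    """Score unseen videos by their cosine similarity to videos the user already watched."""
    norms = np.linalg.norm(watches, axis=0) + 1e-9
    sim = (watches.T @ watches) / np.outer(norms, norms)  # video-video similarity
    scores = sim @ watches[user_idx]                      # aggregate over the user's history
    scores[watches[user_idx] > 0] = -np.inf               # don't re-recommend watched videos
    return np.argsort(scores)[::-1][:top_n]

print(recommend(0, watches))  # indices of the top unseen videos for user 0
```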

  • I don’t have an answer to your question, but suicide isn’t that simple.

    Bad things can happen to people who never consider suicide. Good things can happen to people who still take their own lives.

    I don’t think people always know exactly why they’re suicidal. They might believe it’s because they didn’t get into their dream university or failed their exams. That might be a triggering factor, but it’s not the full story.

    I don’t believe there’s a checklist of things to do and not to do. Why a person ends up taking their own life is entirely personal.

  • Easy solution: host an FTP server with all the videos (a minimal sketch is below). FTP existed long before YouTube was a thing.

    More advanced solution: torrents à la The Pirate Bay. High-quality videos were distributed this way long before YouTube even supported 1080p. PeerTube is built on a similar solution.

    The main problem is attracting content creators to the platform. The problem isn’t technical.
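
    As a rough sketch of how little is needed, the third-party pyftpdlib package (an assumed dependency, with a made-up directory path) can serve a folder of videos over anonymous read-only FTP in a handful of lines:

```python
# pip install pyftpdlib  -- assumed third-party dependency
from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

authorizer = DummyAuthorizer()
authorizer.add_anonymous("/srv/videos")            # read-only anonymous access to the video folder
FTPHandler.authorizer = authorizer

server = FTPServer(("0.0.0.0", 2121), FTPHandler)  # unprivileged port, good enough for a demo
server.serve_forever()
```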

  • https://www.nature.com/articles/nmeth.4642

    This article uses different wording than I do, but in essence: statistics is mostly about using a known model to explain the data, while machine learning is mostly about finding any model that predicts the data well. Different purposes with some overlap. Some statistical methods are used in machine learning, but that doesn’t necessarily mean all of machine learning is statistics. A toy example at the end of this comment makes the contrast concrete.

    The boundary between statistical inference and ML is subject to debate—some methods fall squarely into one or the other domain, but many are used in both. […] Statistics requires us to choose a model that incorporates our knowledge of the system, and ML requires us to choose a predictive algorithm by relying on its empirical capabilities.

    Another (potentially lower-quality) article, not from Nature, that discusses the meme in particular:

    https://www.datarobot.com/blog/statistics-and-machine-learning-whats-the-difference/

    Because of the large number of variables in machine learning datasets, the models developed from them can be simultaneously extremely accurate and almost impossible to understand. Statistical models, on the other hand are typically easier to understand because they are based on fewer variables, and the accuracy of relationships is supported by tests of statistical significance.
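
    To make that contrast concrete, here is a rough sketch (numpy, statsmodels and scikit-learn are assumed dependencies; the data is synthetic): the statistical side fits a model chosen in advance and reads off coefficients and significance, while the ML side just picks a flexible predictor and judges it by held-out accuracy.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=200)  # known generating model

# Statistics: fit the model we believe generated the data, inspect coefficients and p-values.
ols = sm.OLS(y, sm.add_constant(X)).fit()
print(ols.summary())

# Machine learning: pick whatever predicts well and evaluate it on held-out data.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:150], y[:150])
print("held-out R^2:", rf.score(X[150:], y[150:]))
```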

  • Technically not a debut, but Sam Raimi’s Spider-Man was well timed.

    It came shortly after the run of the 90s TV cartoon, and VFX had just reached a point where convincing web-slinging could be done. A few years earlier it would’ve looked awful.

    I would also say that, along with X-Men, it started a new era of superhero movies that could be taken seriously. Compare it to the Batman movies of the 90s, which are goofy in comparison.

  • Only a Sith deals in absolutes. The same goes for programming. Microservices have their benefits. So do monoliths. Neither is going away in the foreseeable future.

    The safest bet is probably to start with a monolith and move to microservices once it makes sense.

  • If the parameters aren’t neatly interpretable, then it’s bad statistics. You’ve learned nothing about the general structure of the data.

    Linear regression models are often great tools for explaining the structure of the data: you can directly see which parts of the input matter most for the output (quick example below). You have very little of that when using neural networks with more than one hidden layer.
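
    For example (a sketch with made-up data; scikit-learn assumed), after putting the inputs on the same scale the coefficients of a linear model can be read directly as relative importance, which a multi-layer network doesn’t give you:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                      # three synthetic input features
y = 3.0 * X[:, 0] + 0.2 * X[:, 2] + rng.normal(size=500)

Xs = StandardScaler().fit_transform(X)             # same scale -> comparable coefficients
model = LinearRegression().fit(Xs, y)

for name, coef in zip(["feature_0", "feature_1", "feature_2"], model.coef_):
    print(f"{name}: {coef:+.2f}")                  # feature_0 dominates, feature_1 is ~0
```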

  • When the CCP carried out a controlled eradication of the pest animals destroying their crops, it caused the Great Chinese Famine and millions died, mostly because those pest animals were natural enemies of even worse pests.

  • That book probably doesn’t go much further than neural networks with one hidden layer. Maybe two hidden layers at most.

    IMO, statistics is about explaining data. Regression is useful for explaining how variables relate to each other. Statistics that doesn’t help us understand the data isn’t useful statistics.

    Modern machine learning has strayed far from data explanation. Now it’s common to deal with more than a dozen hidden layers. It might have roots in statistics, but mostly it’s about brute-forcing some curve onto the data (quick illustration below). It doesn’t help us understand the data better, but at least we’ve approximated some function.
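
    As a rough illustration (scikit-learn assumed, toy data): a network with a dozen hidden layers will happily approximate an arbitrary noisy curve, but its stack of weight matrices tells you nothing about the structure of the data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
x = np.linspace(-3, 3, 400).reshape(-1, 1)
y = np.sin(3 * x).ravel() + 0.1 * rng.normal(size=400)  # arbitrary noisy curve

# Twelve hidden layers: decent at fitting the curve, useless for explaining it.
net = MLPRegressor(hidden_layer_sizes=(32,) * 12, solver="lbfgs",
                   max_iter=5000, random_state=0).fit(x, y)

print("training R^2:", net.score(x, y))
print("weight matrices:", len(net.coefs_))               # none of them is interpretable
```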