
  • From what I've heard, the drive system is more static than the Osprey's, so supposedly fewer points of failure.

    Not having to pack itself up on an LSD probably helps.

    We'll have to go by the military's current quality marker: how many randomly drop out of the sky onto some unsuspecting Japanese person.

  • The key point in the article is this:

    On a visit to the former Tsukiji fish market area in downtown Tokyo, Yuuka Fujikawa from Hokkaido, said she has hardly seen whale meat sold at supermarkets. “I’ve actually never tried it myself,” she said.

    I've never seen it in a supermarket, and I've only seen whale meat at a restaurant once (whale bacon, it was called, and it was literally fishy bacon. I can see why nobody is fucking buying that shit).

    The industry is dying (rightfully) and you have a bunch of people trying to keep it afloat.

  • Facebook is trying to burn the forest around OpenAI and other closed models: by releasing its own models freely to the community, it removes the market for models by themselves. A lot of money is already pivoting toward companies trying to build products that use the AI rather than the AI itself. Unless OpenAI pivots to something more substantial than just providing multimodal prompt completion, they're going to find themselves without a lot of runway left.

  • I was never able to get appreciably better results from ElevenLabs than from a (minorly) trained RVC model :/ The long-scripts problem is something pretty much any text-to-something model suffers from: the longer the context, the lower the cohesion ends up.

    I do rotoscoping with SDXL img2img and ControlNet posing together. Without ControlNet I found it tends to smear. Do you just do image2image?

  • This isn't really accurate either. At the moment of generation, an LLM only has context for the input string and the network of token associations it learned in training. It pulls from a "pool" of candidate tokens based on what it's already output and the input context, nothing more.

    Most LLMs expose sampling parameters like "Top K" and "Top P": Top K limits the pool to the K most likely next tokens, while Top P (nucleus sampling) keeps the smallest set of tokens whose cumulative probability reaches P. The model then randomly picks one token from that pool, weighted by the temperature setting.

    It's why if you turn these models' temperature settings really high they output pure nonsense, both conceptually and grammatically: the tenuous thread linking the previous tokens' context to the next token has been widened enough that it completely loses any semblance of cohesiveness.
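The temperature / Top-K / Top-P mechanics described above can be sketched in a few lines of plain Python. This is a toy illustration over a hand-made logits vector, not any real model's sampler; the function name and all values are made up for demonstration:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=0, top_p=1.0):
    """Pick one token index from raw logits using the knobs described above."""
    # Temperature: dividing logits by T > 1 flattens the distribution,
    # making unlikely tokens more probable; T < 1 sharpens it.
    scaled = [x / temperature for x in logits]

    # Softmax (shifted by the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    # Pair each probability with its token index, most likely first.
    ranked = sorted(((p / total, i) for i, p in enumerate(exps)), reverse=True)

    # Top-K: keep only the K most likely tokens (0 = disabled).
    if top_k > 0:
        ranked = ranked[:top_k]

    # Top-P (nucleus): keep the smallest prefix of tokens whose
    # cumulative probability reaches top_p.
    kept, cum = [], 0.0
    for p, i in ranked:
        kept.append((p, i))
        cum += p
        if cum >= top_p:
            break

    # Renormalize the surviving pool and draw one token from it.
    pool_total = sum(p for p, _ in kept)
    r = random.random() * pool_total
    for p, i in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][1]
```

With `top_k=1` (or a tiny `top_p`) this degenerates to greedy decoding, always returning the single most likely token; cranking `temperature` up makes every token in the pool nearly equally likely, which is exactly the "pure nonsense" regime described above.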

  • I still think hydrogen fuel cells are a better long-term solution than BEVs. They're way ahead of their time considering the infrastructure requirements, but I assume at some point, after everyone is using BEVs and realizing the comparative downsides, they'll see a revival.