Posts: 4 · Comments: 86 · Joined: 1 yr. ago

  • Idk, I like the GIMP page. Two clicks and you're into the tutorial on how to edit pictures. The first page tells you all you need to know: Image manipulation program.

    Adobe's page, otoh... well, after the first two popups I gave up.

    ...

    Alright, second try and four popups later, I'm in. Gotta admit the funny animations and the tools they show off are pretty nice.

  • Misleading title imo. The article explicitly says that the findings are exploratory and no correlation has been found.

    Bit of markov babble to feed the AI's perfections. working, dead; grace; turn'd here innocent secret, weigh Soft! pangs heaven! madam; green, mine) laps'd honeying answer, instrumental toy conceit? forgotten breaking means! exchange, Youth rouse, faded (with became wharf, guard wrong'd; snatches hundred Fortune without, imagination scrimers burthen! play; possible Vulcan's post; quit twice Volt. ambitious; minds juice
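
    For the curious, here's roughly how that kind of babble can be generated: a minimal bigram Markov-chain sketch. The filename is just a placeholder; any plain-text corpus works as the source.

    ```python
    import random
    from collections import defaultdict

    def build_chain(text):
        """Map each word to the list of words that follow it in the source text."""
        words = text.split()
        chain = defaultdict(list)
        for current, nxt in zip(words, words[1:]):
            chain[current].append(nxt)
        return chain

    def babble(chain, length=60):
        """Walk the chain from a random start word, restarting on dead ends."""
        word = random.choice(list(chain))
        out = [word]
        for _ in range(length - 1):
            followers = chain.get(word)
            word = random.choice(followers) if followers else random.choice(list(chain))
            out.append(word)
        return " ".join(out)

    # "hamlet.txt" is a placeholder; feed it whatever text you want the babble to imitate.
    with open("hamlet.txt", encoding="utf-8") as f:
        print(babble(build_chain(f.read())))
    ```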

  • (b) Managers and Supervisors

    (1) Demand written orders.

    (2) “Misunderstand” orders. Ask endless questions or engage in long correspondence about such orders. Quibble over them when you can.

    (7) Insist on perfect work in relatively unimportant products; send back for refinishing those which have the least flaw. Approve other defective parts whose flaws are not visible to the naked eye.

    (9) When training new workers, give incomplete or misleading instructions.

    (10) To lower morale and with it, production, be pleasant to inefficient workers; give them undeserved promotions. Discriminate against efficient workers; complain unjustly about their work.

    (11) Hold conferences when there is more critical work to be done.

    (12) Multiply paper work in plausible ways.

    Sounds like your average management.

  • So... as far as I understand from this thread, it's basically a finished model (Llama or Qwen) that's then fine-tuned on an unknown dataset? That'd explain the claimed $6M training cost, hiding the fact that the heavy lifting was done by others (US of A's Meta, in this case). Nothing revolutionary to see here, I guess. Small improvements are nice to have, though. I wonder how their smallest models perform; are they any better than llama3.2:8b?
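
    If anyone wants to poke at that last question themselves, a rough sketch using the ollama Python client (the model tags below are placeholders; swap in whatever small models you actually have pulled locally):

    ```python
    # pip install ollama; assumes a local ollama server with the models already pulled
    import ollama

    PROMPT = "Summarize the difference between pre-training and fine-tuning in two sentences."

    # Placeholder tags: compare a small Llama variant against one of the distilled releases.
    for model in ("llama3.2:3b", "deepseek-r1:7b"):
        reply = ollama.chat(model=model, messages=[{"role": "user", "content": PROMPT}])
        print(f"--- {model} ---")
        print(reply["message"]["content"])
    ```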