
Posts: 0 · Comments: 428 · Joined: 2 yr. ago

  • DISPENSING MACHINE: Hello. How can I help you?
    CAT: Fish!
    DISPENSING MACHINE: Today’s fish is trout a la creme. (Produces a dish.) Enjoy your meal.
    CAT: Fish!
    DISPENSING MACHINE: Today’s fish is trout a la creme. (Produces a dish.) Enjoy your meal.
    CAT: Fish!
    DISPENSING MACHINE: Today’s fish is trout a la creme. (Produces a dish.) Enjoy your meal.
    CAT: Fish!
    DISPENSING MACHINE: Today’s fish is trout a la creme. (Produces a dish.) Enjoy your meal.
    CAT: Fish!
    DISPENSING MACHINE: Today’s fish is trout a la creme. (Produces a dish.) Enjoy your meal.
    CAT: Fish!
    DISPENSING MACHINE: Today’s fish is trout a la creme. (Produces a dish.) Enjoy your meal.
    CAT: I will!

  • Sort of. We know 'how it works' to the extent that it was engineered with a particular method and purpose. The problem is that it's incredibly difficult to gain any insight into what's 'inside' the network once the data has been propagated through it.

    Visualizing a neural network can look a little bit like a constellation of stars. Each star is a node and is connected to other nodes. When given an input, each node makes a small calculation and passes the result to the nodes it is connected to. Each result is scaled by the connection it travels along (by what is called a weight), and during training the results are used to adjust those weights. That's what's in the black box.

    The constellations in an LLM are very large (the first L in LLM). Each 'layer' may have hundreds of nodes, each of which is connected to every node of the next layer. If there are 100 nodes in two adjacent layers, that makes 10,000 connections. There are many layers in an LLM.

    Notice that I didn't mention anything about the nodes or the connections storing any data. That's because they don't, at least in the sense that we're used to thinking about it. There doesn't exist a string of text that says 'Bill Burr's SSN is ###-##-####'. It's just the nodes that do the calculations, and the weights of their connections.

    So by now you can probably see why it's so tricky to determine what's 'inside' a neural network: it's really a set of operations, not a set of data. The most reliable way to see what it does (so far) is to put something in and see what comes out.
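    The "operations, not data" point can be sketched in a few lines of plain Python. This is a toy fully connected network with made-up weights (every number below is invented for illustration, and the sigmoid activation is just one common choice) — nowhere in it is any fact stored as text, yet feeding an input through it produces an output:

    ```python
    import math

    def sigmoid(x):
        # Squash a weighted sum into the range (0, 1)
        return 1 / (1 + math.exp(-x))

    def forward(inputs, layers):
        """Propagate inputs through each layer: weighted sum, then activation."""
        activations = inputs
        for weights in layers:  # weights[j][i]: connection from node i to node j
            activations = [sigmoid(sum(w * a for w, a in zip(row, activations)))
                           for row in weights]
        return activations

    # Two layers: 3 inputs -> 2 hidden nodes -> 1 output.
    # That is 3*2 + 2*1 = 8 connections; an LLM has billions of these.
    layers = [
        [[0.5, -0.2, 0.1],
         [0.3, 0.8, -0.5]],   # hidden layer: 2 nodes, 3 incoming weights each
        [[1.0, -1.0]],        # output layer: 1 node, 2 incoming weights
    ]

    print(forward([1.0, 0.0, 1.0], layers))  # one number between 0 and 1
    ```

    The only thing you could "read out" of this network is the weight matrices, and those numbers mean nothing on their own — which is exactly why inspecting a trained model is so hard.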

  • You're not presenting facts and facts alone, you're presenting facts with your opinions mixed in.

    Yes, that's because they aren't doing any reporting. It's commentary. That's the point of it. The actual news they're commenting on is this article by the BBC. It's Hackaday, not Reuters.

    I suspect our schools are failing us if we don't even know how journalism works.

  • Their views have proven quite unpopular. To quote David Frum:

    "If conservatives become convinced that they cannot win democratically, they will not abandon conservatism. They will reject democracy."

    And so they have.

  • Nah bro you're supposed to say

    "I'm so sick of these people who won't stop thinking or talking about people people who won't stop thinking or talking about people who won't stop thinking or talking about annoying guy" say people who won't stop thinking or talking about people people who are talking about people who won't stop thinking or talking about annoying guy.

  • "I'm so sick of these people who won't stop thinking or talking about people who won't stop thinking or talking about annoying guy" say people who are talking about people who won't stop thinking or talking about annoying guy.