The OpenTF Manifesto
jadero (@jadero@programming.dev) — 1 post, 146 comments, joined 2 yr. ago
It certainly does, but sometimes desperate circumstances call for desperate actions. Or at least that's how I justify my desperation code. :)
In elementary school, I learned that the round numbers ended with 0. As I progressed, I came to realize that this was equivalent to saying that round numbers are integer-multiples of 10.
Now that you're asking the question, I would generalize that, so that round numbers are multiples of the base.
In binary (converted to decimal), that would be 2, 4, 6, 8, ...
In octal (converted to decimal), that would be 8, 16, 24, 32, ...
... and so on.
I also have no problem with negative round numbers.
It strikes me that 0 seems to be a canonical round number in that it's a round number regardless of base.
I wouldn't object if you were to say that round numbers are integer powers of the base (10, 100, 1000, ... for decimal). If your definition doesn't include 0, then I'll expect a good explanation for why not.
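The two candidate definitions above can be sketched in a few lines of Python. This is just an illustration of the distinction, not anything canonical; the function names and the choice to count 1 (= base^0) as round under the "powers" definition are my own assumptions.

```python
def is_round_multiple(n: int, base: int = 10) -> bool:
    """The elementary-school rule, generalized: round means an
    integer multiple of the base. Includes 0 and negatives, matching
    the observation that 0 is round in every base."""
    return n % base == 0

def is_round_power(n: int, base: int = 10) -> bool:
    """The stricter rule: round means base**k for some integer k >= 0
    (10, 100, 1000, ... in decimal). Excludes 0 and negatives, so this
    definition needs the promised explanation for why 0 is left out."""
    if n < 1:
        return False
    while n % base == 0:
        n //= base
    return n == 1

print([x for x in range(-20, 21) if is_round_multiple(x, 10)])   # [-20, -10, 0, 10, 20]
print([x for x in range(1, 1001) if is_round_power(x, 10)])      # [1, 10, 100, 1000]
```

Passing a different `base` gives the binary and octal sequences mentioned above, e.g. `is_round_multiple(16, 8)` is true.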
But, truth be told, I could learn to live with any definition I can wrap my head around, as long as I can use my elementary school definition in polite company. :)
Hah! Yes, I was aware of that. I only hope that, should I be so afflicted, that still applies when using some of those words in the gloriously flexible ways they are capable of. :)
I'm interested in this primarily as an English teacher. I need to be able to spot the linguistic tics and errors and recognize where it likely came from.
That might well turn out to be the Red Queen's Race. It's only a guess, but I suspect that competitive models, the advances resulting from competition, and the advances and experimentation associated with catching and correcting mistakes will mean that you'll generally be playing catch up.
Frankly, I don't even have anything more useful to offer than the unrealistic suggestion that all such work be performed in class using locked down word processing appliances or in longhand. It may be that the days of assigning unsupervised schoolwork are over.
Okay, now I get it. That is pretty close to how I imagine it, too. That is part of why I think these LLMs may give insight into cognition more generally.
I had never thought of that while reading books and articles that describe and investigate the errors we make, especially when there is some kind of brain damage. But I feel like I've seen all these errors described in humans by Oliver Sacks et al.
I agree that consciousness is a sensitive issue. I haven't refined my thinking on it far enough to really argue my position, but I suspect that it's just one more aspect of the "mind of the gaps". As with the various "god of the gaps" creationist arguments, I think that consciousness will end up falling into that same dead end. That is, we'll get far enough to start feeling comfortable with the idea that gaps are only gaps in the record or our understanding, not failures of theory.
Some current discussion of the matter is already starting to set up the relevant boundaries. We have ourselves as conscious beings. Over time we've come to accept that those with mental and intellectual disabilities are conscious. Some attempts to properly define consciousness leave us no choice but to conclude that consciousness is like intelligence in that there are degrees of consciousness. That, in turn, opens the door to the possibility of consciousness in everything from crows and octopuses to butterflies and earthworms to bacteria and even plants.
I find it particularly interesting that the "degrees of consciousness" map pretty nicely to the "degrees of intelligence".
So if you were to ask me today if my old Fidelity chess computer was conscious, I'd say "to a low degree". Not because I claim any kind of special knowledge, but because I'd be willing to bet a small amount of money that we'll get to the point where the question can actually be answered with confidence and that the answer would likely be "to a low degree".
To your discussion of the neural correlates of consciousness, my opinion is that making the claim that this still tells us nothing about "what material experiences" is a step into the "mind of the gaps". I'm happy enough to have those correlates as evidence that information processing and consciousness cannot be kept separate.
I don't really follow you. I'm not able to make the leap from the methods of floating point math to construction of sentences. There is a sense in which I understand what you've written and another sense in which I feel like there was one more step on the staircase than I realized :)
Thanks, I'll check it out.
Thanks, I didn't know that. I guess I need to broaden my reading.
I like your comment regarding the (usually) thoughtful effort that goes into creative endeavours. I know that there are those who claim that deliberate effort is antithetical to the creative process, but even serendipitous results have to be deliberately examined and refined. Until a system can say "oh, that's interesting enough to investigate further" I'm not convinced that it can be called creative. In the context of LLMs, I think that means giving them access to their own outputs in some way.
As for the dangers, I'm pretty sure that most of us, even those of us looking for danger, will not recognize it until we see it. That doesn't mean we should just barrel ahead, though. Just the opposite. That's why we need to move slowly. Our reflexes and analytical capabilities are pretty slow in comparison to the potential rate of development.
I wonder how creative these things are. Somewhere between "hallucination" and fully verifiable correct answers based on current knowledge, there might be a "zone of creativity."
I would argue that there is no such thing as something completely from nothing. Every advance builds on work that came before, often combining bodies of knowledge in disparate fields to discover new insights.
That is true, but perhaps inappropriate in this case. Humans are not predictable, nor is weather, the actual outcomes of policy decisions, and any number of things that are critical to a functioning society. We mostly cope with most issues by creating systems that are somewhat resilient, take into account the lack of perfection, and by making adjustments over time to tweak the results.
I think perhaps a better analogy than the oil refinery might be economic or social policy. We have to always be fiddling with inputs and processes to get the results we desire. We never have perfectly predictable outcomes, yet somehow mostly manage to get things approximately correct. And this doesn't even touch the issue that we can't really agree on what "correct" is, even though we seem to be in general agreement that 1920 was better than 1820 and that 2020 was better than 1920.
If we want AI to be the backbone of industry, then the current state of the art probably isn't suitable and the LLM/transformer systems may never be. But if we want other ways to browse a problem space for potential solutions, then maybe they fit the bill.
I don't know and I suspect we're still a decade away from really being able to tell whether these things are net positive or not. Just one more thing that we have difficulty predicting, so we have to be sure to hedge our bets.
(And I apologize if it seems I've just moved the goal posts. I probably did, but I'm not sure that I or anyone else knows enough at this point to lock them in place.)
There are a few things I've taken from that article on first reading:
- I was substantially correct in my understanding of how multidimensional matrices and neural networks are used. While unsurprising given the amount of reading I've done over the last several decades on various approaches to AI, it's still gratifying to feel that I actually learned something from all that reading.
- I saw nothing in there to argue against my thesis that things like ChatGPT may be doing for intelligence what evolutionary biology has done to creationism. In the case of evolution, it has forced creationists to fall back on a "God of the Gaps" whose gaps grow ever smaller. ChatGPT et al have me thinking that any attribution of mind or intelligence to "mystery" or the supernatural or whatever hand waving is en vogue is or will be consigned to ever smaller gaps. That is, it is incorrect to claim that intelligence, human or otherwise, is currently and will forever remain unexplainable.
- The fact that we cannot easily work out exactly how a particular input was transformed to a particular output strikes me as a "fake problem." That is, given the scale of operations, this difficulty of following a single throughline is no different from many other processes we have developed. Who can say which molecules go where in an oil refinery? We have only a process that is shown useful in the lab then scaled to beyond comprehension in industry. Except that it's not actually beyond comprehension, because everything we need to know is described by the process, validated at small scales, and producing statistically similar useful results at large scales. Asking questions about individual molecules is asking the wrong questions. So it is with LLM and transformers: the "how it works" is in being able to describe and validate the process, not in being able to track and understand individual changes between input and output at scale.
- Although not explicitly addressed, the "hallucinatory" results we occasionally see may have more in common with the ordinary cognitive failures we are all subject to than anything that can be labelled as broken. Each of us has in our backgrounds something that got misclassified in ways that, when combined with the way we process information, lead to wild conclusions. That is why we have learned to compare and contrast our results with the results of others and have even formalized that activity in science. So it may be necessary to apply that activity (compare and contrast) with other systems, including the ones built in to our brains.
Anyway, some pseudorandom babbling that I hope is at least as useful as a hallucinating AI.
Bonus: it sounds like a scream of terror.
I disagree to some extent. I could never have had a career without Visual Basic and Access. Now in my retirement, I struggle mightily to put together all the pieces required. You might say that getting me out of the field is a good thing, but without masters creating tools and tutorials for journeymen and women, we will forever need the masters on the front lines instead of leveraging their mastery for more valuable ends.
I have two hypotheses for why some kinds of software grow worse over time. They are not mutually exclusive and, in fact, may both be at work in some cases.
Software has transitioned from merely complex to chaotic. That is, there is so much going on within a piece of software and its interactions with other pieces of software, including the operating system itself, that the mathematics of chaos are often more applicable than logic. In a chaotic system, everything from seemingly trivial differences between two ostensibly identical chips to the order in which software is installed, updated, and executed has an effect on the operating environment, producing unpredictable outcomes. I started thinking about the systems I was using with this in mind sometime in the early 2000s.
The "masters" in the field are not paying enough attention to the "apprentices" and "journeymen". Put another way, there are too many programmers like me left unsupervised. I couldn't have had a successful career without tools like Visual Basic and Access, the masterful documentation and tutorials they came with, and the wisdom to make sure I was never in a position where my software might have more than a dozen users at any one site. Now we have people who don't know enough to use one selection to limit the options for the next, juggling different software and frameworks, trying to work in teams to do the bidding of someone who can barely type. And the end result is supposed to be used by thousands of people on all manner of equipment and network connections.
One reason that open source software seems more reliable is that people like me, even if we think we can contribute, are mostly dissuaded by the very complexity of the process. The few of us who do navigate the system to make a contribution have our offerings carefully scrutinized before acceptance.
Another reason that open source software seems more reliable is that most of it is aimed at those with expertise or desiring expertise. At least in my experience, that cohort is much more tolerant of those things that more casual users find frustrating.
I think that "all" of anything is going to be a dumpster fire no matter what your standards are. Sturgeon said that 90% of everything is crud. What he didn't say was that everyone has their own opinion on which 10% is worth anything.
Or you could put on your hip waders and follow Kipling into the waters looking for the 20% that makes it worthwhile to wade through the other 80%.
(More at https://en.m.wikipedia.org/wiki/Sturgeon%27s_law, including a link to the actual text of what Kipling said.)
And, as always, attempting to code to that spec will expose contradictions, inconsistencies, and frequently produce something that the customer judges as unfit for purpose.
Coding has never been the toughest problem, except in the matter of security attacks.
Excellent point. I've got exactly one SQRL login and even it was just to play with it. I never got as far as real world considerations. :)
Dummy question: does the new license apply retroactively? My limited understanding of licensing is that a new license can affect only those who choose to use the version that includes the new license. Those who don't upgrade are still operating under the old license, aren't they?
I thought that non-retroactive licensing was key to keeping source open, because the user just doesn't upgrade or someone forks and moves on.