It's not out of the question that we get emergent behaviour where the model can connect non-optimally mapped tokens and still translate them correctly, yeah.
Still, this does not quite address the issue of tokenization making it difficult for most models to accurately distinguish between the hexadecimals here.
Having the model write code to solve a problem and then asking it to execute that code is an established technique to circumvent this issue, but all of the model interfaces I know of with this capability are very explicit about when they are making use of this tool.
Is this real? On account of how LLMs tokenize their input, this can actually be a pretty tricky task for them to accomplish. This is also the reason why it's hard for them to count the number of 'R's in the word 'Strawberry'.
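To make that concrete, here's a rough sketch (in Kotlin, purely for illustration - the actual language and interface depend on whatever execution tool the model is wired up to) of the kind of snippet a model might generate and run instead of answering directly:

    fun main() {
        val word = "strawberry"
        // Plain character comparison sidesteps tokenization entirely:
        // the code operates on individual chars, not on whatever tokens
        // the model happened to see.
        val count = word.count { it == 'r' || it == 'R' }
        println("'$word' contains $count 'r' characters")
    }

The point is just that counting characters in code is trivial, whereas to the model the word may well be one or two opaque tokens rather than a sequence of letters.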
There used to be pockets on women's clothes - or more accurately, you tied them on yourself as they came separately from the clothes - but they fell out of fashion as handbags became the fashion statement that said: look - I'm not poor enough to have to have pockets.
Very dumb, but it is what it is.
What baffles me now is that pockets on women's clothes haven't made a comeback yet. How asleep at the wheel are fashion designers?
As others mentioned, it's diminishing returns, but there's still a lot of good innovation going on in the codec space. As an example - the reduction in file size that H.265 achieves compared to H.264 at similar quality is staggering. Codecs are a special form of black magic.
Panama should make sure that ample explosives are placed throughout the canal, and convey in clear terms to Trump that these will be triggered if the U.S. makes any move that threatens its sovereignty.
Somewhat impressive, but still not quite a threat to my professional career, as it cannot produce reliable software for business use.
It does seem to make it possible for novices to create 'bespoke software' where they previously would not have been able to, or would not have been able to justify the time commitment, which is fun. This means more software gets created that otherwise would not have existed, and I like that.
Moving to Kotlin taught me to appreciate the underlying fundamentals of the JVM and the patterns present in Java.
I'd rather not use Java today, though. Kotlin is basically Java but with the best practices enabled by default and the bad parts made impossible at a language level.
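To illustrate what I mean, a couple of toy Kotlin snippets (my own examples, nothing exhaustive) showing defaults you don't get out of the box in Java:

    // Nullability is part of the type system: a plain String can never be null,
    // so the NullPointerException-prone patterns Java permits simply don't compile.
    fun greet(name: String?): String {
        // The compiler forces the null case to be handled before 'name' is used.
        return "Hello, ${name ?: "stranger"}"
    }

    // A data class replaces the usual Java boilerplate (constructor, getters,
    // equals, hashCode, toString) with a single declaration.
    data class User(val id: Long, val name: String)

    fun main() {
        println(greet(null))        // Hello, stranger
        println(User(1, "Ada"))     // User(id=1, name=Ada)
    }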
Without actually knowing how much constructing the physical buttons costs, I would guess that the real savings are in process optimization - if all you have for the interface is a screen, then you don't need to have the interface design done before constructing the car - you can parallelize these tasks.
Insufficient as far as justifications go, but understandably lucrative.
Anything Turing-complete is a powerful tool, but people are reacting negatively because of just how much of the wrong tool it is here.
Does an Excel-based solution offer adequate runtime performance? No
Does an Excel-based solution offer adequate write concurrency? No
Does an Excel-based solution offer appropriate data durability guarantees? No
Basically the only saving grace of Excel-based solutions is that they are built with tools that finance workers already understand, and that is quite simply not enough. Basing systems at this scale on Excel is criminally negligent.