For those wondering why this is downvoted: 192.168.x.x addresses are private (local network) IPs. They only work on your local network and aren't reachable from the wider internet.
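If you want to sanity-check an address yourself, Python's standard ipaddress module knows the private ranges (a quick sketch; the two addresses are just arbitrary examples):

```python
import ipaddress

# 192.168.0.0/16 is one of the RFC 1918 private ranges; addresses in it
# only have meaning on your local network, not on the public internet.
print(ipaddress.ip_address("192.168.1.10").is_private)  # True
print(ipaddress.ip_address("8.8.8.8").is_private)       # False
```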
You still haven't answered: why save a private business? Why bail out some international private-equity firm by buying Tim's from them? Why would that ever be worth taxpayer money?
I said AI isn't close in education. That was my entire claim.
I never said anything about any other company. I said AI in education isn't happening soon. You keep pulling in other sectors.
I'd also made several comments in this thread saying that before you came in.
EDIT: Give me a citation that LLMs can reason about code. Because in my experience as someone who codes professionally with AI (Copilot), it isn't capable of that. It guesses what it thinks I want to write in small segments.
Especially when it has a nasty habit of leaking secrets.
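To be concrete about "leaking secrets": I mean completions that spit out credential-shaped strings. A rough sketch of what flagging that could look like (the patterns below are hypothetical toy examples, not a real scanner's rule set):

```python
import re

# Toy patterns for illustration only; real scanners (e.g. gitleaks)
# ship hundreds of far more careful rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key id
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
]

def flag_suspect_lines(generated_code: str):
    """Return (line_no, line) pairs that look like hardcoded secrets."""
    hits = []
    for no, line in enumerate(generated_code.splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((no, line.strip()))
    return hits

# A completion that regurgitates a credential-shaped string gets flagged:
suggestion = 'api_key = "sk_live_abcdefghijklmnop1234"'
print(flag_suspect_lines(suggestion))
```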
EDIT2: Forgot to say why I'm ignoring other fields: because we're not talking about AI in those fields. We're talking about education, and search engines at best. My original comment was that AI-generated educational papers still serve their original purpose.
What the fuck does Palantir have to do with any of this?
My larger point: AI replacing teachers is at least a decade away.
You've given no evidence that it is. You've just said you hate my sources without actually making a single argument of your own.
You said "well, it stores context," but who cares? I showed that it doesn't translate to what you think it does, and you said you didn't like that, without providing any evidence that it means anything beyond looking good on a graph.
I've said several times: SHOW ME IT'S CLOSE. I don't care what law enforcement buys, because that has nothing to do with education.
As opposed to the nothing you've cited showing that context tokens actually improve reasoning?
I love how you keep moving further and further from the education topic at hand, and are now bringing in police surveillance, which everyone knows is 100% accurate.
5 Conclusion
In this study, we investigate the capacity of LLMs, with parameters varying from 7B to 200B, to comprehend logical rules. The observed performance disparity between smaller and larger models indicates that size alone does not guarantee a profound understanding of logical constructs. While larger models may show traces of semantic learning, their outputs often lack logical validity when faced with swapped logical predicates. Our findings suggest that while LLMs may improve their logical reasoning performance through in-context learning and methodologies such as CoT, these enhancements do not equate to a genuine understanding of logical operations and definitions, nor do they necessarily confer the capability for logical reasoning.
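For anyone lurking: here's a toy illustration, my own sketch rather than anything from the paper, of why "swapped logical predicates" matters. A brute-force truth-table check shows modus ponens is valid, while the swapped form (affirming the consequent) is not:

```python
from itertools import product

def valid(premises, conclusion):
    """An argument is valid iff every truth assignment that makes
    all premises true also makes the conclusion true."""
    for p, q in product([True, False], repeat=2):
        if all(f(p, q) for f in premises) and not conclusion(p, q):
            return False
    return True

implies = lambda a, b: (not a) or b

# Modus ponens: (P -> Q), P  |-  Q   -- valid
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))   # True

# Swap which predicate sits in the premise vs. the conclusion:
# (P -> Q), Q  |-  P   (affirming the consequent) -- invalid
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))   # False
```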
So you say I should be intellectually honest by doing the experiment myself, then say my experiment is going to be shit anyway? Sure... that's also intellectually honest.
Here's the thing.
My education is in physics, not CS. I know enough to know that whatever I try isn't going to be really valid.
But unless you have peer-reviewed research showing otherwise, I'd take your home-grown experiment to be exactly as valid as mine.
I'm finding most of what I'd ask on SO can be asked on the tool's GitHub issues. If a product doesn't offer a support forum or GitHub issues, it doesn't get used by me.
And that's if you assume the unlimited power it would take right now to run something like AlphaFold at the scale of all of human education.
We have, at best, proofs of concept that computers can talk. But LLMs don't have any way of actually knowing anything behind the words. That's kind of the problem.
And it's not a "we'll figure out the one trick" problem; it's that, more fundamentally, how the technology works doesn't allow for that to happen.
Specialized AI like that is not what most people know as AI. When most people say "AI," they mean LLMs.
Specialized AI, like the one showcased, is still decades away from generalized creative thinking. You can't ask it to run a science experiment within a class, because it just can't. It's only built for math proofs.
Again, my argument isn't that it will never exist.
Just that it's so far off it'd be like trying to write smartphone laws in the 90s. We would have had only pipe dreams of what the tech could be, never mind its broader social context.
So talk to me when it can deliver, in the case of this thread, clinically validated ways of teaching. We're still decades from that.
If you read about it, it's capable of very little beneath the surface of what it appears to be.
Show me one that is well studied, like clinical-trial levels, and then we'll talk.
We're decades away at this point.
My overall point is that it's just as meaningless to talk about now as it was in the 90s, because we can't conceive of what a functioning product will be, never mind its context in a greater society. When we have it, we can discuss it then, since we'll have something tangible to discuss. But where we'll be in decades is hard to regulate now.
Yes, but a lot of people lurk in these forums to learn. So I just wanted to explain it for them.