Is lemmy now what reddit used to be 10+ years ago?
rufus @discuss.tchncs.de · Posts: 12 · Comments: 1,377 · Joined: 2 yr. ago
Yeah, doesn't really work. I mean it has a rough idea that it needs to go east. And I'm surprised that it knows which interstates are in an area and a few street names in the cities. I'm really surprised. But I told it to get me from Houston to Montgomery as in your example. And in Houston it just lists random street names that aren't even connected and are in different parts of the city. Then it drives north on the I-45 and somehow ends up in the south on the I-610-E and finally the I-10-E. But then it makes up some shit, somehow drives to New Orleans, then a bit back and zig-zags its way back onto the I-10. Then come some more instructions I didn't fact-check, and it figures out that it needs to go through Mobile and then north on the I-65.
I've tested ChatGPT on Germany. And it also knows which Autobahn connects to the next. It still does occasional zig-zags, and in between it likes to do an entire 50 km (30 mile) loop that ends up two cities back where it came from... then drives east again and takes a different exit on the second try.
However: I'm really surprised by the level of spatial awareness. I wouldn't have expected it to come up with mostly correct cardinal directions and interstates that are actually connected and run through the mentioned cities. And it names the cities in between.
I don't think I need to try "phi". Small models have very limited knowledge stored inside of them. They're too small to remember lots of things.
So, you were right. Consider me impressed. But I don't think there is a real-world application for this unless your car has a teleporter built in to deal with the inconsistencies.
How do I start a new lemmy?
Is this an honest question?
If yes: Read the info here: https://join-lemmy.org/docs/administration/administration.html
That is the installation guide.
If you're not that tech-savvy I recommend using a self-hosting platform like YunoHost or Cosmos.
You have to at least put some effort in and google it and read the instructions yourself. Everyone is invited to run their own instance of Lemmy, and so are you.
You'd need a domain and some sort of server. Any VPS will do, or some device at home that's online 24/7 if you can set up port forwarding on your home internet connection.
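Once it's running, a quick sanity check is to hit the instance's public API from any machine. Just a minimal sketch, assuming the standard Lemmy HTTP API route (/api/v3/site); the domain is a placeholder you'd swap for your own:

```python
# Check that a Lemmy instance answers over HTTPS and report its site name.
# "lemmy.example.org" is a placeholder; /api/v3/site is the usual Lemmy API
# endpoint as far as I know.
import requests

DOMAIN = "lemmy.example.org"

def check_instance(domain: str) -> None:
    resp = requests.get(f"https://{domain}/api/v3/site", timeout=10)
    resp.raise_for_status()
    site = resp.json().get("site_view", {}).get("site", {})
    print(f"{domain} is up, site name: {site.get('name', 'unknown')}")

check_instance(DOMAIN)
```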
I'd invite you to have a look at it. If you're really interested, feel free to ask follow-up questions.
Regarding your other question: Yes, you can.
Which model(s) did you try? I'm willing to test it later. Downside is, I mainly use smaller LLMs, live in Germany in an urban region with lots of streets and different Autobahnen, and it's kind of a hassle to deal with textual driving instructions anyway. 😆
I think they're using Widevine DRM. And with DRM they can enforce whatever arbitrary policies they like. They set special restrictions for Linux. I think Amazon sets 480p as the max, Netflix 720p and YouTube 4K, or something like that. AFAIK it has little to do with technology. It's just a number that the specific company sets in their configuration.
Quite a few AI questions coming up in selfhosted these last few days...
Here are some more communities I'm subscribed to:
And a few inactive ones on lemmy.intai.tech
I'm using koboldcpp and ollama. KoboldCpp is really awesome. In terms of hardware it's an old PC with lots of RAM but no graphics card, so it's quite slow for me. I occasionally rent a cloud GPU instance on runpod.io. I'm not doing anything fancy, mainly role play, recreational stuff, and I occasionally ask it to give me creative ideas for something, translate something, or re-word or draft an unimportant text / email.
I've tried coding, summarizing and other stuff, but the performance of current AI isn't good enough for my everyday tasks.
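If you want to script the re-wording or translation stuff, the local ollama HTTP API is enough. Rough sketch, assuming ollama is running on its default port and you've already pulled a model ("llama3" here is just an example name):

```python
# Send a prompt to a locally running ollama server and print the reply.
# Assumes ollama is listening on the default port 11434 and the model was
# pulled beforehand; "llama3" is only an example.
import requests

def ask_ollama(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_ollama("Re-word this email so it sounds a bit friendlier: ..."))
```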
Ah, nice. Thanks for sharing.
Yeah, but usually with open-source software you get like 150 GitHub comments complaining and outlining their shady business practices... if there's something to complain about.
The XZ disaster is an example of something else. There are probably more backdoors in proprietary software that we just don't know about. And they can just keep them hidden away and force the manufacturers to do so. No elaborate social engineering like in the XZ case needed... And no software is safe. They all have bugs and most of them depend on third-party libraries. That has nothing to do with being open or closed source. If anything, being open gives you more of a chance to catch malicious behaviour. At least generally speaking. There will be exceptions to this rule.
I think in the near future it's mostly unskilled and office jobs. I think we still have a shortage of skilled IT professionals and people who can do more than web development and simple Python scripts. And we also have a shortage of teachers, kindergarten teachers, people who care for the elderly, doctors, psychologists. And despite AI creeping into all these fields, I still see a career there for quite some time to come. Also, I don't see an AI plumber coming around anytime soon to fix your toilet. So I'd say handyman is a pretty safe bet.
But I'd say all the people making career decisions right now had better factor that in. Joining a call center is probably not a sustainable decision anymore. And some simple office or management jobs will become redundant soon. I just think big tech laying off IT professionals is more an artificially inflated bubble bursting than AI now being able to code complex programs or do the job of an engineer.
It's not really a gamble. We know what AI can do. And there are lists predicting which jobs can be automated. We can base our decisions on that, and I saw articles like that in the newspapers 10 years ago. They're not 100% accurate, but they're a rough guide... For example, we still have a shortage of train operators. And 10 years ago people said driving trains on rails was easy to automate and we shouldn't pursue that career anymore.
It'll likely get there. But by that time society will have changed substantially. We can watch Star Trek if we're talking about a post-scarcity future where all the hard work is done for us. We'd need universal basic income for that. Or we end up in a dystopia. But I think that's too uncertain to base decisions on.
I don't think you can use Retrieval Augmented Generation or vector databases for a task like that. At least not if you want to compare the whole papers and not just a single statement or fact. And that's what most tools are focused on. As far as I know, the tools concerned with big PDF libraries are meant to retrieve specific information out of the library that's relevant to a specific question from the user. If your task is to go through the complete texts, it's not the right tool, because it's made to only pick out chunks of text.
I'd say you need an LLM with a long context length, like 128k or way more, fit all the texts in and add your question. Or you come up with a clever agent: make it summarize each paper individually or extract facts, then feed the results back and let it search for contradictions, or do a summary of the summaries (see the sketch below).
(And I'm not sure if AI is up to the task anyways. Doing meta-studies is a really complex task, done by highly skilled professionals of a field. And it takes them months... I don't think current AI's performance is anywhere near that level. It's probably going to make something up instead of outputting anything that's related to reality.)
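Something like this is what I mean by the agent approach, as a rough sketch. ask_llm() is just a stand-in for whatever model or API you'd use, and the prompts are illustrative, not tested:

```python
# Map-reduce style agent: summarize each paper on its own, then compare the
# summaries in one final prompt. ask_llm() is a placeholder for any LLM call.
from typing import Callable

def find_contradictions(papers: dict[str, str], ask_llm: Callable[[str], str]) -> str:
    summaries = []
    for title, full_text in papers.items():
        # Map step: condense each paper so everything fits into one final prompt.
        summary = ask_llm(
            "Summarize the key claims and findings of this paper "
            f"in about 10 bullet points:\n\n{full_text}"
        )
        summaries.append(f"## {title}\n{summary}")

    # Reduce step: look for contradictions across the condensed versions.
    combined = "\n\n".join(summaries)
    return ask_llm(
        "Here are summaries of several papers. List any claims that contradict "
        f"each other and name the papers involved:\n\n{combined}"
    )
```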
A PC magazine or well established tech blog.
I don't know. In the TV documentary they made it look like fun.
hoerbuch.us if you want German content.
Easy thing. Just pick an animal that you don't need to turn back from. I'd say a search-and-rescue dog is probably having a blast and a good life.
What's that got to do with AI?
Edit: Ah. Probably the search bar from the screenshot.
Depends on your exact question. I still have some analog phones around. But they're connected via a VoIP adapter. And I suppose most calls are converted to internet protocol somewhere along the way anyways. I don't think there are many analog lines and exchanges throughout the country anymore that'd connect you directly (without conversion) to your grandma.
Because you could use your time to work and make money. Labor is kind of selling your time to other entities and doing their stuff. And even an entrepreneur can use time to grow their business. Or capital alone regularly grows with interest or by investing it.
You could also use your time to make waffles. That'd be oddly specific, but even then you could sell those and you'd end up with money again... And exposing waffles to time just results in them growing mold.
Permanently Deleted
Isn't that very similar to what TikTok does? Just with a different algorithm and maybe other content than just videos?
No. I'd say the whole internet felt different 10+ years ago. Including this: who's on here and how they behave. And I'd say the average intellect is different. But that could also be me growing up.