I'm running ollama in termux on a Samsung Galaxy A35 with 8GB of RAM (+8GB of swap, which is useless for AI), and the Ollama app. Models up to 3GB work reasonably fine on just the CPU.
Serendipity is a side effect of the temperature setting. LLMs randomly jump between related concepts, which exposes stuff you might, or might not, have thought about by yourself. It isn't 100% spontaneous, but on average it ends up working "more than nothing". Between that and bouncing ideas off it, they have a use.
With 12GB RAM, you might be able to load models up to 7GB or so... but without tensor acceleration, they'll likely be pretty sluggish. 3GB CoT models already take a while to go through their paces on just the CPU.
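In case it's useful, here's a minimal sketch of how I poke the local model from Python. It assumes ollama is serving on its default port (11434); the model tag is just an example, swap in whatever you've pulled. The temperature option is where the "serendipity" knob lives:

```python
# Minimal sketch: query a local ollama server (e.g. running inside termux).
# Assumes ollama is listening on its default port 11434; the model tag below
# is only an example, use whatever ~3GB model you actually pulled.
import json
import urllib.request

def ask(prompt: str, temperature: float = 0.8) -> str:
    payload = {
        "model": "llama3.2:3b",   # example tag
        "prompt": prompt,
        "stream": False,          # wait for the full answer instead of streaming
        "options": {"temperature": temperature},  # higher = more random hops between concepts
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask("Give me three tangents loosely related to note-taking workflows.", temperature=1.2))
```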
Suing for copyright infringement requires money, both for lawyers and for the proceedings.
Large artists have that money; small artists don't, so more often than not they end up watching their copyright get abused without being able to do anything about it.
To get any money, small artists generally sign away their rights, either directly to clients or studios (work for hire), or to publishers... who do have the money to enforce the copyright, but pay peanuts to the artist... when they pay anything at all. A typical publishing contract has an advance payment, a marketing provision... then any copyright payments go first to pay off the publisher's "investment", and only then does the artist get a certain (rather small) percentage. Small artists rarely reach the payment threshold.
Best case scenario, small artists get defended by default by some "artists, editors, and publishers" association... which is like putting wolves in charge of sheep. The associations routinely charge for the use of copyrighted material... then don't know whom to pay it out to, because not every small artist is a member, so they just pocket it, often using it to subsidize publishers.
Copyright laws, as of right now, primarily benefit publisher dynasties (like Disney), then large publishers, then large studios and large artists, a few lucky small artists... and leave most small artists SOL.
It's still "world-changing great". All the knowledge sharing, all the collaboration, all the scientific advances have been growing at the same rate as the "commoners" have been joining it and getting trapped in the slop.
The only change is that the Internet is not just for nerds anymore, it's also for preachers, scammers, and the average brainwashed populace.
It used to be easy to ignore the peasants from inside an ivory tower's echo chamber. The Internet has brought those voices out for everyone to hear... and to realize humanity is not as idealized as they thought. Time to put some real work into fixing some real problems.
Looks decent. I think you could easily improve the tarp on top with some random "weathering", and a light touch from a heat gun on the corners, to get closer to the original look (just don't set it on fire 😉).
Component TDPs are averages; actual power usage can dip well below spec, but also spike above it for short bursts
Brownout is one of the messiest issues to troubleshoot
Additional considerations:

- Check the PSU rail distribution.
- If there's a separate GPU rail, make sure it can cover the GPU's TDP +20%.
- If there are multiple rails, each should cover whatever is connected to it +20%.
- If you're using HDDs or other start-heavy components, factor in the initial power spike (staggered spin-up is also an option). The GPU starting at almost idle should offset the overall power requirement, but still factor the spike into the rail calculations (rough arithmetic sketch below).
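For what it's worth, the arithmetic is simple enough to sanity-check in a few lines; all the wattages below are made-up examples, plug in your own specs:

```python
# Back-of-the-envelope rail check; every wattage here is a made-up example.
HEADROOM = 1.20  # the "+20%" margin from above

rails = {
    # rail name: (capacity in W, steady loads in W, start-up surges in W)
    "GPU rail":        (300, [230],         []),
    "CPU rail":        (300, [125, 15],     []),
    "Peripheral rail": (120, [9, 9, 9, 20], [25, 25, 25]),  # 3 HDDs spinning up together
}

for name, (capacity, steady, surges) in rails.items():
    need_steady  = sum(steady) * HEADROOM     # normal operation +20%
    need_startup = sum(steady) + sum(surges)  # worst case: everything spins up at once
    worst = max(need_steady, need_startup)
    verdict = "OK" if capacity >= worst else "UNDERSIZED"
    print(f"{name}: steady ~{need_steady:.0f} W, start-up ~{need_startup:.0f} W, "
          f"capacity {capacity} W -> {verdict}")
```

In that example the peripheral rail passes the steady-state check but fails when all the drives spin up at once, which is exactly the kind of marginal setup that ends in brownouts.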
Therapists are not supposed to bond with their patients. If you find one you can stand for half an hour, take what you can and leave the rest; they're not meant to be your friend or lover. The fact that chatbots let people fall in love with them is a huge fail from a therapy point of view.
Bouncing ideas back and forth is a good use though. A good prompt I've seen recently:
> I'm having a persistent problem with [x] despite having taken all the necessary countermeasures I could think of. Ask me enough questions about the problem to find a new approach.
If you worry about privacy, you can run an LLM locally, but it won't be fast, and you'd need extra steps to enable search.
You can use local AI as a sort of "private companion". I have a few smaller versions on my smartphone; they aren't as great as the online ones, and run slower... but you decide the system prompt (not the company behind it), and they work just fine for bouncing ideas.
NotebookLM is a great tool to interact with large amounts of data. You can bet Google is using every interaction to train their LLMs: everything you say is going to be analyzed, classified, and fed back as some form of training data, hopefully anonymized (...but have you read their privacy policy? I haven't, "accept"...).
All chatbots are prompted by the company to be somewhat sycophantic so you come back; the cases where they were "too sycophantic" were just mistakes of dialing it up too far. Again, you can avoid that with your own system prompt... or at least add an initial prompt in the config, if you have the option, to somewhat counteract the company's prompt.
If you want serendipity, you can ask a chatbot to be more spontaneous and suggest more random things. They're generally happy to oblige... but the company ones are cut short on anything that could even remotely be considered as "harmful". That includes NSFW, medical, some chemistry and physics, random hypotheticals, and so on.
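With a local model, the "personality" is just a system message you control. A minimal sketch against ollama's chat endpoint (the model tag and the wording of the system prompt are only examples):

```python
# Minimal sketch: your own system prompt on a local model, instead of the vendor's.
# Assumes a local ollama server on the default port; the model tag and the
# prompt wording are just examples.
import json
import urllib.request

SYSTEM_PROMPT = (
    "Don't flatter me or soften disagreement. Point out flaws bluntly, "
    "and feel free to throw in tangential, even random, related ideas."
)

def chat(user_msg: str) -> str:
    payload = {
        "model": "llama3.2:3b",
        "stream": False,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_msg},
        ],
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

print(chat("I'm having a persistent problem with my note sync. Ask me questions until we find a new approach."))
```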
GOOD Regulation can be good: like, not selling contaminated food to the public.
BAD Regulation can be VERY bad: like, requiring hospitals to use middlemen who negotiate medication pricing with insurance providers, and whose only goal is to steadily increase the "savings" to insurance by increasing the "prices", then requiring hospitals to "forgive" most of it to the insurance, while people without insurance get a bill for 1,000,000% the real cost.
Unfortunately, the US has seen bipartisan support for the latter kind, and recently has been slashing the former.
Hassan Abedini, deputy political head of Iran’s state broadcaster, said Iran had evacuated the three sites some time ago.
“The enriched uranium reserves had been transferred from the nuclear centres and there are no materials left there that, if targeted, would cause radiation and be harmful to our compatriots,” he told the channel.
The International Atomic Energy Agency also said Sunday morning it had detected “no increase in off-site radiation levels.”
Back in the day, I started migrating notepad stuff to Markdown on a Wiki. Then on a MediaWiki. Then DokuWiki. Then ZimWiki. Then Joplin. Then GitHub Pages and a self-hosted Jekyll.
Each, single, one, of, them, uses a slightly different flavor of Markdown. At this point, I have stuff spread over ALL OF THEM, much of it rotting away in "backups to migrate later". 😮‍💨
I've been considering "vibe coding" some converters...
As for syncing... the Markdown part is easy: git.
Working with a Markdown editor to update GH Pages was a good experience.
Having ZimWiki auto-sync to git was good, but I didn't find a decent compatible editor for Android.
I switched to Joplin, lured by the built-in auto-sync options, but I kind of regret it now that it has a folder with thousands of files in it.
Obsidian is not OSS itself, but has an OSS plugin to sync to git.
I've read that using Logseq alongside Obsidian should be possible... I was planning to test that setup, keeping Obsidian in charge of sync, possibly with GitHub/Jekyll and git-lfs for images and attachments.
PS: assuming one had working back-and-forth converters for the different Markdown flavors, and everything stored in git, one could theoretically use git hooks to convert to/from whatever flavor a particular editor uses locally.
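Something like this is what I had in mind for the hook side, just a sketch: it assumes pandoc is installed, and the folder layout and flavor names are placeholders, not a working setup:

```python
#!/usr/bin/env python3
# Sketch of a git post-checkout hook (drop it in .git/hooks/post-checkout, chmod +x).
# Assumes pandoc is installed; folder names and Markdown flavors are placeholders.
import subprocess
from pathlib import Path

SRC_FLAVOR = "gfm"         # flavor stored in the repo (GitHub/Jekyll side)
DST_FLAVOR = "commonmark"  # flavor the local editor prefers
SRC_DIR = Path("notes")          # canonical copies tracked by git
DST_DIR = Path("local_editor")   # converted working copies, kept out of git

DST_DIR.mkdir(exist_ok=True)
for src in SRC_DIR.rglob("*.md"):
    dst = DST_DIR / src.relative_to(SRC_DIR)
    dst.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["pandoc", "--from", SRC_FLAVOR, "--to", DST_FLAVOR,
         "--output", str(dst), str(src)],
        check=True,
    )
    print(f"converted {src} -> {dst}")
```

The reverse direction would be a matching pre-commit hook doing the opposite conversion; the hard part stays whatever doesn't round-trip cleanly between flavors.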
The Play Store has become way more restrictive, they've purged tons of old and/or "inactive" apps... including some I happened to have bought some time ago.