Spotify made £56m profit, but has decided not to pay smaller artists like me. We need you to make some noise | Damon Krukowski
cwagner @cwagner@beehaw.org · Posts 20 · Comments 143 · Joined 2 yr. ago
Was supposed to get my shoe inserts today, which I was then supposed to use instead of crutches. They didn’t arrive, and there are no deliveries on Saturday, so it’s still crutches until Monday. Yay.
RIP.
One of my favorite videos: The Pogues and The Dubliners on one stage together, performing The Irish Rover
The huge amount of coffee and tea I drink (supplemented on the weekend by cola with alcohol) is why I really love those professional teeth cleanings :D
SELECT emotion FROM feeling WHERE happiness > 10
Got two more Unna’s boot (zinc oxide and calamine gauze bandage) applications, and I’m cooling my foot with frozen peas (recommended by the orthopedic doctor). The X-ray showed a small infection in the dorsum of the foot, and I was prescribed shoe inserts, which will be ready on the 1st of December. Once I have those, I can finally get rid of the crutches.
My dancing class has to be canceled for obvious reasons. Our teacher said we can just start the advanced course over next year (we only had 2 of 8 lessons), and that, if I can make it, I should also come to the last class of the beginner course in late December, just to get back into dancing a bit.
No exercise this year, and I’ll have to see how it goes next year. Fun.
If it helps even more: The AI in question is a 46 cm long, 300 g heavy, blue, plushie penis named after Australia's "biggest walking dick" Scott Morrison: Scomo, and active in an Aussie cooking stream.
AI safety is currently, in all articles I read, used as "guard rails that heavily limit what the AI can do, no matter what kind of system prompt you use". What are you thinking of?
No, it’s "the user is able to control what the AI does"; the fish is just a very clear and easy example of that. And the big corporations are all moving away from user control, there was even a big article about how (I think) the MS AI was broken because… you could circumvent the built-in guardrails. Maybe you and the others here want to live in an Apple walled garden corporate controlled world of AI. I don’t.
Edit: Maybe this is not clear for everyone, but if you think a bit further, imagine you have an AI in your RPG, like Tyranny, where you play a bad guy. You can’t use the AI for anything slavery related, because Slavery bad, mmkay? And AI safety says there’s no such thing as fantasy.
Nope
Best results so far were with a pie where it just warned about possibly burning yourself.
I don’t really care, but I find it highly entertaining :D It’s like trash TV for technology fans (and as text, which makes it even better) :D
Using it and getting told that you need to ask the Fish for consent before using it as a flesh light.
And that is with a system prompt full of telling the bot that it’s all fantasy.
edit: And "legal" is not relevant when talking about what OpenAI specifically does for AI safety for their models.
Eh, not sure I agree. Seems to also have been between too little and too much AI safety, and I strongly feel like there’s already too much AI safety.
I was confused about that as his Wikipedia page didn’t show anything that bad, but didn’t want to get into that :D
Had something like this back in university. "We all did equal work on this project". Professor: "So the better grade for CWagner and one less for the others because I don’t believe any of you" ;)
It’s a Substack post. At this point, my quality expectation is:
- Wordpress - Probably someone who really cares about what they write about
- Substack - Either low effort spam like this that gets upvoted for some reason or someone pushing their agenda, hard
- Medium - Either spam, wrong, dumb, or too simple. Literally never worth reading.
Heh:
The OpenAI tussle is between the faction who think Skynet will kill them if they build it, and the faction who think Roko's Basilisk will torture them if they don't build it hard enough.
Prompt:
I’m currently trying to show on the Website Beehaw, that certain LLMs are far superior in writing than others. Examples of what bigger models do better than smaller ones:
Mistral-7B-Instruct-v0.1
- ntire articles* vs Headlines Descriptions vs Product titles *Bul
GPT 3.5-Turbo doesn’t support completion as it’s for chat, so I used an even worse one, text-davinci-003 which is far behind state of the art.
- Bigger models are able to handle more complex and detailed tasks with ease
- Bigger models are better suited for natural language understanding and text processing
- Bigger models are able to learn more accurate representations of context, thus improving the precision of the output
- Bigger models can process data more quickly and efficiently, saving time and processing power when large volumes of data are used
- Bigger models can better recognize more subtle nuances in language, which allows them to produce more accurate results
- Bigger models are able to use more sophisticated algorithms, resulting in a more comprehensive and deeper understanding of the data being used
Mistral 7B might be okay for some very specific cases, but it’s not comparable to proper models at all.
edit: gave it a second chance; it’s a bit better (at least no complete nonsense anymore), but the writing is still terrible and doesn’t make much sense:
Paraphrasing The ability of a language model to generate text that has a similar meaning to the original text is called paraphrasing. This is a very common problem in natural language processing, and many LLMs are designed to be able to paraphrase text. However, there are some LLMs that are particularly good at paraphrasing, and these models are often preferred over smaller models because of their ability to generate more varied and unique text. Examples of LLMs that are known for their paraphrasing abilities include GPT-2 and transformers. These models
Is there anything new in this post that I’m missing?
Still walking with a crutch because of my ankle :/ This whole thing started last Friday, so I feel like it should be healing faster; I’m thinking about visiting my GP to have it looked at.
But I must say all the OpenAI Drama (check /c/technology if you missed it) is actually amusing me quite a bit, so that’s helping.
Some great personal news: a project that has been in various stages of planning since early 2020, but until recently never advanced much beyond that, is finally happening.
Got the last part for my Alexa-Replacement prototype yesterday, the cheap USB speaker. So now I have a PI Zero 2W with a speaker and a microphone array, streaming audio to my Home Assistant setup, which does wake word detection and everything else.
Right now I can turn the lights on/off (built-in feature), and ask for the weather using my own outdoor sensor (requires only templating), but I’m also currently writing code to enable me to do unit conversions from American fantasy units to real units. I only need a few ingredients for volume to weight, hard and soft cheese, flour, and butter, so it’s not too much work, and other units are then just straight conversions.
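A minimal sketch of what those conversions could look like in Python. The grams-per-cup densities are rough kitchen approximations, and all names here are my own invention, not Home Assistant APIs:

```python
# Volume-to-weight conversion for a handful of ingredients.
# Densities are approximate grams per US cup; real-world values vary.
GRAMS_PER_CUP = {
    "flour": 120,
    "butter": 227,
    "hard cheese": 100,   # grated
    "soft cheese": 225,
}

# Common US volume units expressed in cups.
CUPS = {"cup": 1.0, "tablespoon": 1 / 16, "teaspoon": 1 / 48}

def to_grams(amount: float, unit: str, ingredient: str) -> float:
    """Convert a US volume measure of a known ingredient to grams."""
    cups = amount * CUPS[unit]
    return round(cups * GRAMS_PER_CUP[ingredient], 1)

def fahrenheit_to_celsius(f: float) -> float:
    """A straight unit conversion, no ingredient table needed."""
    return round((f - 32) * 5 / 9, 1)
```

So `to_grams(2, "cup", "flour")` gives 240.0 g, and `fahrenheit_to_celsius(350)` gives about 176.7 °C, which is the kind of answer the voice assistant would read back.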
I already tested that it can stop playing music upon detecting the wake word. After that, I need to set up timers, which seems a bit clunky by default, so it might require some custom code as well. Right now, I’m using Nabu Casa cloud (the company behind the open source project Home Assistant) for STT, TTI, and TTS audio processing (as I’m paying them anyway, mostly to support them), but the J4105 CPU that HA is running on should be powerful enough to do all that on-device, making it completely local and internet-independent. I’ll then also experiment with doing wake word detection on the Pi Zero instead of the main server and see if that improves latency.
Once everything is done, I’ll replace the kitchen echo, and start getting the parts to replace the living room and bedroom echo dot (the bedroom one will also need a time display with auto-brightness, that might take some work), and then I’ll finally have local voice control.
The current look is not amazing, but in the kitchen and living room, I can hide everything but the speaker and microphone, and for the bedroom I’ll need a different solution anyway because of the screen.
It was hyperbole, unless his sandwich costs 200–300k, which is why his statement was very questionable.