The problem is that a lot of people who want to learn to code, and who are conditioned to want the college route, don't actually know there's a difference, and that you can be completely self-taught in this field without ever setting foot in a university.
Additionally, it's going to cause you headaches if your server is low-spec. The federation queue is not well optimized for GIGANTIC subscription counts like this. There is an active draft PR working on it, but using that script is still a bad idea.
spotdl, like pytube, looks up the tracks on YouTube and then uses youtube-dl to grab the audio. It's not FLAC, but it's perfectly good for my needs.
Honestly, just go for it, it's pretty straightforward! I'd share my chat transcript, but at points it contained things like my API keys.
I can, however, give some excerpts from the conversation:
You are a senior software engineer. Create a python script that logs into a website using the selenium/standalone-chrome docker container
This was actually my first time using the "You are a senior software engineer" bit, but I've heard a few people say it works. I came across the idea of using Selenium from this prompt:
Please write a python script to load a website, login, navigate to a URL, and then scrape all of the text that matches a CSS selector
In fact, here is the chat transcript for that one. Once I got to the end of that transcript I decided to try out the code. I realized Selenium was using my installed browser, and that wasn't going to work once I moved this to a server. That was when I moved into a new chat, which contains what became the final script, and started the conversation with this prompt:
I need to write a script that submits a form on a webpage. This script needs to be run from a VPS that does not have a browser
It was in this conversation that I learned about using the headless Chrome container. Everything I did was a combination of prompting for additions and reading the documentation on what the container was capable of.
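To give a rough idea of where that landed, here's a minimal sketch of that kind of script, assuming the selenium/standalone-chrome container is listening on its default port 4444; the URLs, selectors, and credentials are placeholders, not my actual final script:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Point Selenium at the standalone-chrome container instead of a local browser
options = webdriver.ChromeOptions()
driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",
    options=options,
)

try:
    # Log in (URL, selectors, and credentials here are placeholders)
    driver.get("https://example.com/login")
    driver.find_element(By.NAME, "username").send_keys("me@example.com")
    driver.find_element(By.NAME, "password").send_keys("not-my-real-password")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

    # Navigate to another page and scrape everything matching a CSS selector
    driver.get("https://example.com/some/page")
    for element in driver.find_elements(By.CSS_SELECTOR, ".playlist-entry"):
        print(element.text)
finally:
    driver.quit()
```

The container side is just something like `docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome`, and the script talks to it over that port.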
I will regularly ditch a chat thread and carry the output from a previous one into a new one. The model takes the earlier context of the conversation into account when generating, and sometimes I want to pivot, or focus in on a specific approach, without that baggage.
Once I had a more focused idea of what the tech stack was going to be, it was just a matter of prompting for what I needed, testing, feeding back any errors to get corrections, noticing something wrong (like it not appending .mp3 to the files) or something else I wanted to change, and prompting for that in plain English.
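As a made-up example of the kind of thing I mean, a fix like the missing .mp3 extension usually boils down to a couple of lines once you describe it in plain English (the downloads folder here is just a placeholder):

```python
from pathlib import Path

download_dir = Path("downloads")  # placeholder location

# Append .mp3 to anything that didn't get the extension
for f in download_dir.iterdir():
    if f.is_file() and f.suffix != ".mp3":
        f.rename(f.with_name(f.name + ".mp3"))
```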
There are all kinds of people saying you should use X method and Y approach, but I find I get great results just by being clear and concise about what I'm looking for, as I would be when speaking to another developer.
Edit: To more directly answer your question, this is using the "Public Pages" feature that is already built into AzuraCast, along with a bunch of custom CSS to make it look nicer.
ChatGPT, and GPT in general, is like having a pair-programming intern who knows everything but is also capable of making really dumb and obvious mistakes. Combined with the new code interpreter it's crazy powerful, but at the end of the day you need a skilled human operator guiding its output to get the best results.
That said, give it a shot, as it can definitely help you improve your skills by explaining what a block of code does.
I've gone ages without using Spotify and found the list still updates regularly, whether or not I'm actually there listening. That's also why I threw in the Last.fm recommendations, so I have something more dynamic based on my current listening.
Both spotdl and pytube use YouTube as their download source; my understanding is they're able to grab 320kbps audio if it's available. It's no FLAC ripped from a CD, but it's good enough for my use case, since I don't want to drag torrenting or Usenet into my VPS.
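For reference, the pytube side of that is only a few lines; this is just a sketch with a placeholder URL and output path:

```python
from pytube import YouTube

yt = YouTube("https://www.youtube.com/watch?v=VIDEO_ID")  # placeholder URL

# Grab the highest-bitrate audio-only stream that's available
stream = yt.streams.filter(only_audio=True).order_by("abr").desc().first()
stream.download(output_path="downloads")
```

Worth noting the raw stream comes down as an mp4/webm audio container rather than a literal MP3, so any .mp3 naming or conversion has to happen afterwards.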
I don't see why it wouldn't be possible to swap the Docker container running Chrome for one running Chromium or Firefox. The only interaction with the browser itself is via Selenium, which should be browser-agnostic. I just went with what ChatGPT suggested immediately.
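I haven't tested it, but presumably the swap would look something like this, pointing at a selenium/standalone-firefox container on the same default port:

```python
from selenium import webdriver

# Same Remote connection as before, just with Firefox options and a Firefox container
options = webdriver.FirefoxOptions()
driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",
    options=options,
)
driver.get("https://example.com")
print(driver.title)
driver.quit()
```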
I should clarify that this is running a headless browser, so you don't actually need Chrome installed; it exists entirely within the confines of the container and is completely ephemeral. You could also modify this to work with the standard Selenium webdriver and your installed browser of choice, but I made this with the intention of running it on my server rather than my personal machine.
Then that means the cost of the devices has already been factored into the base price of your contract. Nothing is free; companies don't make money by giving away phones they paid for. You're just paying a higher base fee regardless of whether or not you take the hardware from them. By not taking the phone, you're increasing the profit margin they make on your monthly contract.
It's not free; the cost is built into your monthly payment. This is how all carrier-supplied phones are funded. If they allowed you to bring your own unlocked device, they could charge you less. They wouldn't, but they could, because they wouldn't need to recoup the hardware expense.
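To put some completely made-up numbers on it:

```python
# Purely illustrative figures, not any carrier's actual pricing
phone_cost = 720        # what the carrier pays for the handset
contract_months = 24    # length of the contract
service_cost = 40       # what the service alone would cost per month

monthly_bill = service_cost + phone_cost / contract_months
print(monthly_bill)  # 70.0 -- the "free" phone is really $30/month baked into the bill
```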
Would you be able to give a further breakdown of the Lemmy-associated costs?
I'm really curious to know what the current operational costs are for a Lemmy server. I was hoping to do some very rough math to calculate the monthly cost per user, but with the current numbers I'd have to include users from Mastodon and Calckey.
@ruud@lemmy.world I was considering setting up a Calckey instance. Is there anywhere I can read more about this name change and new release? They only have two blog posts and neither covers this. I'm not already in the Mastodon/Calckey ecosystem, so I don't know where to look for information.
Lemmy is a platform managed by a disparate group of operators, all with different levels of experience and commitment.
Verifying identity online is both a hard problem and a legal/security nightmare. It involves validating and possibly storing things like government identification or other sensitive personally identifiable information.
There is no way this will ever be implemented in the core platform. All existing solutions today are outsourced to third-party companies with expertise in validating different forms of identification, as well as the legal insurance required to warehouse it.
And all of this is setting aside the obvious fact that you should not be required to doxx yourself in order to view pornographic content online. Minors will just go somewhere else outside of the jurisdiction of these rules and still get access. Hell, just turn off safe search on Bing and you can find porn.
Measures like this don't actually stop minors from accessing pornography. They only put law-abiding citizens at risk by forcing them to trust private companies with their identification and hope their government doesn't decide to police their morality further, or use their revealed sexual preferences against them.
Anyone remember Firesheep?