Any good linux voice changer?
I'm looking at this in Eternity, and it seems only spoilers don't work from the post you linked.
User and community links work properly.
I've been using a TAPO C200. It just required the initial setup to be connected to the internet to configure it via the app; afterwards I blocked its internet traffic at the router level. The feed is processed through https://frigate.video/ which I self-host on a mini PC; not sure how well it'd perform on a Pi.
Check the most upvoted answer, then look into TubeArchivist, which can take your yt-dlp parameters and URLs to download the videos and process them into a better index.
Do you mean a community?
Like having your own !nostupidquestions@lemmy.world but with a different name?
It depends on your instance whether anyone is allowed to create a community or not; there's a configuration in the admin panel to restrict creating them to only admins.
If your instance allows it, then you can go into the home page and see a button on the left side which says "Create a Community" right above "Explore Communities".
Then you just have to fill in the details and click "Create".
I only had to run this on my home server, behind my router, which already has a firewall to prevent outside traffic, so at least I'm a bit at ease there.
In the VPS everything worked without having to manually modify iptables.
For some reason I wasn't able to make a curl call to the internet from inside Docker.
I thought it could be DNS, but that was working properly when trying nslookup tailscale.com
A curl call to the same URL wasn't working at all. I don't remember the exact details of the errors, since the iptables modification fixed it.
AFAIK the only difference between the two setups was ufw enabled in the VPS, but not at home.
So I installed UFW at home, removed the rule from iptables, and everything keeps working right now.
I didn't save the output of iptables before UFW, but right now there are almost 100 rules for it.
For example, since this is curl you're probably going to connect to ports 80 and 443, so you can add --dport to restrict the ports in the OUTPUT rule. And you should specify the interface (in this case docker0) in almost all cases.
Oh, that's a good point!
I'll try to replicate the issue later and test this, since I don't understand why an OUTPUT issue should be solved by an INPUT rule.
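I never saved the exact rules involved, so purely as a sketch: a tightened version along the lines suggested might look like this (the ports, interface, and chains here are assumptions, not my actual rules).

```shell
# Hypothetical sketch, not my actual rules: restrict allowed container
# traffic on docker0 to HTTP(S). --dports only works with an explicit
# protocol, hence -p tcp.
iptables -A OUTPUT -o docker0 -p tcp -m multiport --dports 80,443 -j ACCEPT
# Accept only established return traffic on the INPUT side.
iptables -A INPUT -i docker0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
```

(Worth noting: traffic originating inside bridged containers actually traverses the host's FORWARD chain, not OUTPUT, which may be part of why the OUTPUT/INPUT distinction felt confusing.)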
Well, it's a bit of a pipeline: I use a custom project exposing an API where I can send files or URLs to get videos summarized.
With yt-dlp I can get the video and transcribe it with faster-whisper (https://github.com/SYSTRAN/faster-whisper); the transcription is then sent to the LLM to actually write the summary.
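As a rough sketch of that pipeline (not the actual project code; the function names, model size, and prompt are placeholders):

```python
# Sketch: yt-dlp downloads the audio, faster-whisper transcribes it,
# and the transcript is handed to an LLM for the summary.
# All names here are illustrative, not a real project's API.
import subprocess

def download_audio(url: str, out: str = "audio.m4a") -> str:
    # -x extracts the audio track only, which is all whisper needs
    subprocess.run(["yt-dlp", "-x", "--audio-format", "m4a", "-o", out, url],
                   check=True)
    return out

def transcribe(path: str) -> str:
    # Imported lazily so the sketch loads without faster-whisper installed
    from faster_whisper import WhisperModel
    model = WhisperModel("small", compute_type="int8")
    segments, _info = model.transcribe(path)
    return " ".join(seg.text.strip() for seg in segments)

def build_prompt(transcript: str) -> str:
    # The attribution instruction is what keeps quotes tied to their speaker
    return ("Summarize the following transcript. Paraphrase rather than "
            "quoting verbatim, and attribute any quote to its speaker:\n\n"
            + transcript)
```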
I've been meaning to publish the code, but it's embedded in a personal project, so I need to take the time to isolate it.
I've used it to summarize long articles, news posts, or videos when the title/thumbnail looks interesting but I'm not sure if it's worth the 10+ minutes to read/watch.
There are other solutions, like dedicated summarizers, but I've looked into them and they only extract exact quotes from the original text; an LLM can also paraphrase, making the summary a bit more informative IMO.
(For example, one article mentioned a quote from an expert talking about a company, the summarizer only extracted the quote and the flow of the summary made me believe the company said it, but the LLM properly stated the quote came from the expert)
This project https://github.com/goniszewski/grimoire has on its roadmap a way to connect to an AI to summarize the bookmarks you save and generate tags.
I've seen the code, but I don't remember the exact status of the integration.
Also, I have a few models dedicated to coding, so I've also asked for a few pieces of code and configuration just to get started on a project, nothing too complicated.
Ah, that makes sense!
Yes, a DB would let you build this. But the point is in the word "build": you need to think about what is needed, in which format, and how to model all the relationships to have data consistency and flexibility, etc.
For example, you might implement the tags as a text field, but then you still have the same issues with adding, removing, and reordering. One fix could be a many-tags-to-one-task table. Then you have the problem of mistyping a tag: you might add TODO having forgotten you already have todo, which might not be a problem if the field is case-insensitive, but what about to-do?
So there's still a lot of stuff you might overlook, which will come up to sidetrack you from creating and doing your tasks, even if you abstract all of this into a script.
Specifically for todo lists I self-host https://vikunja.io/
It has an OpenAPI spec (OAS), so you can easily generate a client library in any language to build a CLI on top.
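For example, generating a Python client could look something like this; the spec path and hostname are assumptions, so verify them against your own instance before running.

```shell
# Sketch: generate a typed client from a Vikunja instance's OpenAPI spec.
# The spec URL below is a guess; check where your version serves its docs.
npx @openapitools/openapi-generator-cli generate \
  -i https://vikunja.example.com/api/v1/docs.json \
  -g python \
  -o ./vikunja-client
```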
Each task has a lot of attributes, including the ones you want: relation between tasks, labels, due date, assignee.
Maybe you can have a project for your book list, but it might be overkill.
For links and articles to read I'd say a simple bookmark software could be enough, even the ones in your browser.
If you want to go a bit beyond that I'm using https://github.com/goniszewski/grimoire
I like it because it has nested categories plus tags, most other bookmark projects only have simple categories or only tags.
It also has a basic API, but it's enough for most use cases.
Another option could be an RSS reader if you want to get all articles from a site. I'm using https://github.com/FreshRSS/FreshRSS which has the option to retrieve data from sites using XPath in case they don't offer RSS.
If you still want to go the DB route then, as others have mentioned, since it'll be local and single-user, SQLite is the best option.
I'd still encourage you to use an existing project, and if it's open source you can easily contribute back the code you would have written, helping improve it for the next person with your exact needs.
(Just paid attention to your username :P
I also love matcha, not an addict tho haha)
I can't imagine this flow working with any DB without a UI to manage it.
How are you going to store all that in an easy yet flexible way to handle all with SQL?
A table for notes?
What fields would it have? Probably just a text field.
Creating it is simple: insert "initial note"... But how are you going to update it? A simple UPDATE by ID won't work, since you'd be replacing all the content; you'd need to query the note, copy it into a text editor, and then copy it back into a query (don't forget to escape it).
Then you'll probably want to know which is your oldest note, so you need to include created_at and updated_at fields. Maybe a title per note is a nice addition, so a new title field.
What about the todo lists? Will they be stored in the same notes table?
If so, then the same problem, how are you going to update them? Include new items, mark items as done, remove them, reorder them.
Maybe a dedicated table, well, two tables: list metadata and list items. The metadata table has almost the same fields as notes, but with description instead of text; the list items will have status and text.
Maybe you can reuse the todo tables for your book list and links/articles to read.
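To make the above concrete, here's a minimal sketch of that schema using Python's built-in sqlite3. Table and column names follow the discussion; everything else (like COLLATE NOCASE for the TODO/todo problem) is just one possible choice, not a recommendation.

```python
# Illustrative schema for the notes / lists / tags discussion above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE notes (
    id INTEGER PRIMARY KEY,
    title TEXT,
    text TEXT,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP,
    updated_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE lists (
    id INTEGER PRIMARY KEY,
    title TEXT,
    description TEXT
);
CREATE TABLE list_items (
    id INTEGER PRIMARY KEY,
    list_id INTEGER REFERENCES lists(id),
    status TEXT DEFAULT 'todo',
    text TEXT
);
-- COLLATE NOCASE makes 'TODO' and 'todo' compare equal,
-- but 'to-do' is still a distinct tag.
CREATE TABLE tags (
    task_id INTEGER REFERENCES list_items(id),
    name TEXT COLLATE NOCASE
);
""")
conn.execute("INSERT INTO tags VALUES (1, 'todo')")
same = conn.execute("SELECT COUNT(*) FROM tags WHERE name = 'TODO'").fetchone()[0]
hyphen = conn.execute("SELECT COUNT(*) FROM tags WHERE name = 'to-do'").fetchone()[0]
print(same, hyphen)  # 1 0: case-insensitive match works, hyphens still differ
```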
"so that I can script its commands to create simpler abstractions, rather than writing out the full queries every time."
This already exists: several note-taking apps wrap either the filesystem or a DB so you only have to worry about writing your ideas into them.
I'd suggest not reinventing the wheel unless nothing out there satisfies you.
What are the pros of using a DB directly for your use case?
What are the cons of using a note taking app which will provide a text editor?
If you really really want to use a DB maybe look into https://github.com/zadam/trilium
It uses SQLite to store the notes, so you can check the code and get an idea of whether it would be complicated for you to manually replicate all of that.
If not, I'd also recommend Obsidian; it stores the notes in md files, so you can open them with any software you want and they'll have a standard syntax.
Well, the thing is, if the physics or render steps (not necessarily the logic) don't advance, there's no change in the world or in the screen buffer for the PC to show you. That's what those frame counters show: not how many frames the screen displays, but how many frames the game can output after it finishes its calculations. You can also have a game running at 200 frames while your screen shows 60.
So when someone unlocks the frame rate, they probably just increased the physics steps per second, which has the unintended consequences you described, because the forces aren't adjusted for how many times they're now being applied.
And a bit, yeah: if you know your target is 30fps and you don't plan on increasing it, that simplifies the development of the physics engine a lot, since you don't have to test at different speeds and you avoid the extra calculations to adjust the forces.
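A toy illustration of why unadjusted forces misbehave when the step rate changes, assuming simple Euler integration (not any particular engine):

```python
# If acceleration is scaled by the timestep (dt), the result is the same
# at any tick rate; applied raw per step, it scales with the tick rate.
def simulate(steps_per_second: int, seconds: float = 1.0,
             accel: float = 10.0, scale_by_dt: bool = True) -> float:
    dt = 1.0 / steps_per_second
    v = 0.0
    for _ in range(round(seconds * steps_per_second)):
        v += accel * dt if scale_by_dt else accel  # unscaled: the "unlocked" bug
    return v

print(simulate(30), simulate(60))          # both ~10.0
print(simulate(30, scale_by_dt=False),
      simulate(60, scale_by_dt=False))     # 300.0 vs 600.0
```

Doubling the steps per second doubles the final velocity in the unscaled case, which is exactly the kind of broken behavior unlocked-framerate mods run into.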
"Latinx" or "Latine"?
Neither, use "Latino", that's the gender neutral form.
Or if you don't want to use it and don't want to follow Spanish rules then follow English rules and use "Latin".
What? AI search in Firefox? Haven't seen it, tho I have a custom search engine.
How is the setting in brave related to Firefox?
A browser is also an app; it's not embedded into an OS.
On some it comes preinstalled, which is the case for Safari on iOS.
Also, some browsers just happen to include a PDF reader, but not all had one or currently do.
As @viking@infosec.pub mentioned, Android is meant to be lightweight so it can run in several configurations of hardware.
Most OEMs do include a browser in Android, usually Chrome.
GrapheneOS is basically just Android (in this context), so it doesn't include a browser or a PDF reader, but both are available as dedicated apps from the GrapheneOS team, or you can use any other app you want; for example I use ReadEra, or sometimes Firefox.
I was juggling like that: I had most of my files on NTFS so I could read them in Windows, even files only read by Linux programs.
Most programs were able to read from any part of the file system, but for those with strict paths I used symlinks.
But I haven't had any use for Windows lately, so I decided to delete all but one NTFS partition; this last one is only 256GB, with 100GB free.
The rest of the data I moved to ext4 and btrfs partitions.
You can explain to them that Google is already integrated: they can type their query into the address bar and hit enter to ask Google whatever they want.
Could you get a fountain?
Especially if it spreads the water like rain, it'll help cool the air around it; you'd just have to change the water every so often.
If where you live isn't very humid, you could look into swamp coolers.
You have to configure the space bar so a long-press gives you a popup menu to select another language. Currently there's no way to have multiple languages with the same layout; it's annoying, but you can work with it.
Oh, I was only aware of loans where the lender sets the payments to exactly cover the total over the period; those are the only ones I've seen and taken, so each month I get charged the amount needed to keep up with the loan.
For the rest, it makes sense how they make money, since I've had credit cards which don't show, or at the very least hide, the amount needed to avoid paying interest, and only tell you the minimum payment.
Well, it's just a TS project with a very simple Dockerfile; you can just bun install && bun run prod.
The rest of the dependencies aren't included in the docker image.
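Not the project's actual Dockerfile, but a minimal Bun one generally looks something like this (the image tag, lockfile name, and script name are assumptions):

```dockerfile
# Hypothetical minimal sketch; the real project's Dockerfile may differ.
FROM oven/bun:1
WORKDIR /app
# Install dependencies first so this layer caches between builds
COPY package.json bun.lockb ./
RUN bun install
COPY . .
CMD ["bun", "run", "prod"]
```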
I haven't completely looked into creating a model for Piper, but having to deal with a dataset is not something I look forward to: gathering the data and all that implies.
So I'm thinking it's easier to take an existing model and make adjustments so it fits a bit better with what I'd like to hear constantly.