The rules for bots
It looks like some generics are pretty cheap, but many inhalers look to be unreasonably expensive here. Thanks US healthcare system :(
At least if you can find one of the generics, it looks like it can be under $40.
I was remembering the cost of my wife's regular-use inhalers (she used to have pretty bad asthma), which were over $100 each, but it looks like daily-use ones are different from emergency ones. It could also just be a US thing.
Having never had asthma myself, I never really thought to look up the differences or prices to be honest.
Odd, my google search just has a bunch of Lemmy/Mastodon results. Surely I'm doing something wrong, do I need to disable SafeSearch?
You had me curious so I went and looked it up, and it looks like emergency inhalers actually can be found for under $100, if only barely! TIL, I figured you needed insurance to cover it to even consider buying one.
Donations don't generate nearly as much money as purchases. I like open source just as much as the next person, but there's no way that I could afford to drop my job and go full time into an open source project. Looking at Lemmy's donations for example, the annual budget on OpenCollective is, at this time, ~$10k (which is significantly below US minimum wage at least), and their Patreon link shows $1650/mo. It's a nice chunk of cash, but not sustainable.
One approach I've really liked is what Aseprite does. You can buy the precompiled product, or you can clone the repository and build it yourself. Most users won't build it, so they get paid and still get to share the code with the community.
This is what I do. On the other site, I only really followed two subs, and on this one, I follow closer to 10 communities all oriented around the content of those two subs. Only one of those communities is hosted by Beehaw.
Sometimes I switch to "Local" though to see if anything of interest is going on, but most of the content I view is in "Subscribed". Sure there's less content, but I don't really view it as an issue if it takes me 30m-1hr to get through it all throughout the day. It's not like I'm spending my whole day on Lemmy, this just incentivizes it less :)
These rules seem great honestly. The main bot that comes to mind is the TL;DR bot, which one could easily prompt for in a post when they want a TL;DR, assuming the community has chosen to enable it. Eventually, a list of promptable bots could pop up in one of the instances so that people know which bots are available to be prompted. Alternatively, someone could make a website to list them or something. I can see a healthy bot ecosystem forming based on people's needs.
Since we have more control over the source code, I think eventually what would be nice are community plugins to replace some of the functionality of these bots. For example, a plugin could de-AMP a link, or could provide a banner indicating the rules on a post. If someone really wanted to, they could make a plugin to auto-generate summaries of articles too and include it somewhere in the UI. Since these rules are for Beehaw specifically, I don't think bots which create new posts are that relevant, since there aren't really any niche-specific communities (like a bot which posts changelogs for a game or something), just broad communities.
Any bots not clearly labelled as bots should be given a warning, then banned from the instance in my opinion. The bot setting exists for a reason, bypassing it indicates that the bot author is not willing to respect the rules of the communities the bot is posting in.
Wait CSS supports nesting? Since when?
Also, SASS's mixins, variables (CSS has variables too but they work differently), and many of the other features are hard to beat, which is why I feel like it's almost never a negative to get it up and running on any project I work on (unless the project is a tiny one-off, then it's probably not worth it). One thing I like is that it's close to CSS - increasing the transparency - while still providing abstractions that let you save time and increase consistency across the project.
Looks like labels don't work on `async` blocks, but using a nested inner block does work. Also, yeah, `async` blocks only exist to create `Future`s; they don't execute until you start polling them. I don't really see any reason why you couldn't stick a label on the `async` block itself though; breaking would at worst just create one new state for the future: early returned.
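For reference, a minimal sketch of the nested-block workaround (the function and names here are made up for illustration):

```rust
async fn example(flag: bool) -> i32 {
    // Labels can't go on the `async` block itself, but a labeled plain
    // block inside it gives the same early-exit behavior.
    let value = 'inner: {
        if flag {
            break 'inner 1; // exits the block, not the function
        }
        2
    };
    value * 10
}
```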
I encourage you to find a name for this function that describes why there is a second inner function. One restriction: the name of the function must be `run` (that's what the trait being implemented calls it; you can't rename it).
Sure, you can call the inner function `run_inner_to_fix_rustc_issue_probably_caused_by_multiple_fnmut_impls`, but is that really any better than using two forward slashes to explain the context?
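To make the comparison concrete, here's roughly the shape being described (the trait and the compiler quirk are hypothetical stand-ins):

```rust
trait Runnable {
    fn run(&mut self);
}

struct MyTask;

impl Runnable for MyTask {
    // The trait fixes this name; it has to be `run`.
    fn run(&mut self) {
        // Two forward slashes: the inner function only exists to work
        // around a compiler quirk, which a comment can explain in full
        // sentences instead of a 60-character identifier.
        fn run_inner() {
            // actual work goes here
        }
        run_inner();
    }
}
```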
If I understand this correctly, isn't this solved by having the cell towers add random delays to these delivery reports? I'm not too familiar with the SMS protocol, but I can't imagine adding a little jitter would hurt much of anything.
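Something like this, purely as a sketch of the idea (this is not real telecom code, and the numbers are arbitrary):

```rust
use rand::Rng; // rand crate, v0.8
use std::time::Duration;

/// Hypothetical mitigation: hold each delivery report for a random
/// amount of time so its timing leaks less information.
fn jittered_delay() -> Duration {
    let ms = rand::thread_rng().gen_range(0..2_000); // up to ~2s of jitter
    Duration::from_millis(ms)
}
```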
Also worth mentioning, but you can early-return from a block in Rust too, using the `break` keyword and named blocks: `let x = 'my_block: { if thing() { break 'my_block 1; } 2 };`
Edit: I haven't tried this with `async` blocks, but I'm guessing it works there too?
Continuing from my PC, if you wanted a simulated experience watching a lecture and answering quizzes and such, it might be that watching the lecture is more than enough, especially if you have the quiz answers and test answers. Strategies like this are not new, not AI-powered, and have been decently successful without needing to pay for any courses directly.
However, if you wanted a way to ask questions to a Q&A bot while the lecture is running, you could use a combination of some sort of semantic retrieval (where you're retrieving any relevant learning materials that are expected of you to explore as a student of the course) and providing the most recent lecture contents as context to the LLM.
For the retrieval part, I'd recommend looking at a vector database like Weaviate (potentially offline) or something like Azure Cognitive Search (online/cloud) to store snippets of the learning material - maybe sections of chapters or such - along with their embeddings (other options exist, but these are two that I've personally used). Note that the embeddings these databases use often come from an LLM, so for example with Weaviate, you'll need access to something for embedding generation.
Then, you'd use the question to query the database (either keyphrases, or possibly directly as is) for the relevant snippets, and have some number of those as one part of your context. You can use a transcription of the lecture to provide the second part of the context. Then finally the third part of your context could be the actual question, along with the format you want it to respond in.
This way you can limit the amount of context you need to provide to the LLM (instead of needing to provide the entire set of learning materials as context).
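As a very rough sketch of that flow (every function below is a hypothetical stub; real code would call your vector database's client and your LLM provider instead):

```rust
struct Snippet {
    text: String,
}

// Hypothetical stubs for the real services described above.
fn embed(_text: &str) -> Vec<f32> {
    todo!("call your embedding model")
}
fn query_vector_db(_embedding: &[f32], _top_k: usize) -> Vec<Snippet> {
    todo!("nearest-neighbor search over the stored course material")
}
fn complete(_prompt: &str) -> String {
    todo!("call the LLM")
}

fn answer(question: &str, lecture_transcript: &str) -> String {
    // 1. Retrieve the most relevant course-material snippets.
    let snippets = query_vector_db(&embed(question), 5);
    let retrieved = snippets
        .iter()
        .map(|s| s.text.as_str())
        .collect::<Vec<_>>()
        .join("\n---\n");

    // 2. Assemble the three-part context: snippets, lecture, question.
    let prompt = format!(
        "Course material:\n{retrieved}\n\nLecture so far:\n{lecture_transcript}\n\nQuestion: {question}"
    );

    // 3. Generate an answer from only that limited context.
    complete(&prompt)
}
```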
This would be a pretty complicated project though. It's not as simple as going on character.ai or ChatGPT and creating a carefully-crafted prompt :)
Edit: for limiting the knowledge of the LLM, this might just come down to selecting the right prompt, and even then it seems like it'd be a difficult challenge. I'm not sure that you'll have much success here with current LLMs to be honest, but play around and see if you can get it to avoid generating answers off of materials you shouldn't have learned yet.
I would caution against using an LLM alone for an individualized curriculum. It can be a tool to assist with learning, but it's unreliable enough that you may find yourself being taught incorrect information, or stuck in a situation where the AI is unable to help you understand a concept due to being incapable of understanding you (or anything for that matter, LLMs don't "understand" anything).
If you're looking for a simulated experience, you won't be able to provide all the learning materials from a university as context. It's just too much info (and at least right now technically infeasible from what I know). Instead, you'd want to provide only relevant snippets of information and use those for generation. How you determine which snippets are relevant is up to you, but will most likely require an understanding of the subjects you want it to teach you. Maybe along the process of making this AI, you'll end up just learning the materials you wanted it to teach you anyway though.
Then again, some people might think the obfuscation in 20+ classes is somehow a good thing…
I'd argue that CSS is itself an obfuscation (read: abstraction), and isn't even implemented consistently across browsers. If the abstraction results in less duplication, more consistency across the website, and higher productivity, then I don't think that's a bad thing. (Of course, the flip side is that if using the abstraction results in the same learning curve with less transparency, then the benefits might not outweigh the cost.)
Having never used Tailwind, I can't give a personal opinion on that, but I find it hard to work on any decently sized project without SCSS. Sure I can write pure CSS, but SCSS provides so many QoL features over raw CSS that it's hard to pass up, and it's easy enough to get set up.
Status codes for batch operations are always a mess. Do you return a 400 because one request made no sense even if the rest succeeded, or return a 200? 207 Multi-Status exists, but it comes from WebDAV rather than the core HTTP spec and only seems to support XML response bodies.
Edit: @lysdexic@programming.dev if the RFC proposed a solution to responses for batch operations where some responses may contain errors, then that would be interesting. The RFC, from what I understand, proposes a format for error responses, but does not seem to support mixed error/success responses.
If both success and error responses include the `success` field, then that can be a common discriminator between bodies of successful responses and bodies of error responses. Where this adds value beyond the request's status code, I'm not sure. Maybe it's useful in aggregated responses where partial successes are allowed (like `POST`ing a batch of objects)?
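For instance, a mixed batch response might look something like this (the field names and shape here are made up, not from the RFC):

```rust
use serde_json::json; // serde_json crate

// One 200 response wrapping per-item results, with `success` as the
// per-item discriminator between success and error bodies.
fn batch_response() -> serde_json::Value {
    json!({
        "results": [
            { "success": true,  "id": 42 },
            { "success": false, "error": { "code": "invalid_name", "detail": "name must not be empty" } }
        ]
    })
}
```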
> Your format looks half-baked and not thought all the way through.
This does seem a bit heavily worded. There's likely a reason they originally chose and continue to use that format, and it could be as simple as "all our other APIs use this format" or similar. There's more to choosing a response schema than what is theoretically the most efficient way of communicating the information.
Link seems to be broken for me, but adding your choice of `{.html,.pdf,.txt,.xml}` to the end of the URL seems to fix it.
Self-documenting code only documents what the code does, not why it does it. I can look at a well written method that populates a list with random elements from another list and go "I know what that does!" but reading the code doesn't tell me the reason this code was written or why alternatives weren't chosen.
In the case of Rust, it goes even a step further when working with unsafe code. Sure I know what invariants need to be held for unsafe code to be sound, but not everyone does, and it isn't always clear why a particular assumption made in an unsafe block (the list has at least 5 elements, for example) can be made soundly.
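For example (mirroring the "at least 5 elements" case), the code below says what happens; only the comments can say why it's sound:

```rust
/// Returns the fifth element without bounds checking.
///
/// # Safety
/// `items` must contain at least 5 elements.
unsafe fn fifth(items: &[u8]) -> u8 {
    *items.get_unchecked(4)
}

fn main() {
    let buffer = [1u8; 8];
    // SAFETY: `buffer` has 8 elements, so index 4 is in bounds.
    let value = unsafe { fifth(&buffer) };
    println!("{value}");
}
```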
I think adding 🤖 makes it stand out enough that even while skimming, I'd stop to look at what that is. Honestly this proposed format seems great, since it's short but stands out, and I can "opt-into" reading the tl;dr by clicking the spoiler.