Hell, I think there's a solid argument to be made that it's not even a sustainable model for the biggest players. As it stands they're offering remarkably little functionality for how much it costs them. On the other hand, Mozilla's work in this space up until now has largely focused on bringing previously unimaginable functionality to locally hosted open source models and datasets. And that does look like a sustainable business model.
The article suggests that this is solved by re-pushing the original content with the mistake flag, but I would argue that a notification may even be appropriate, similar to how Mastodon sends notifications for edits.
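For reference, a rough sketch of what that re-push might look like on the wire, assuming it goes out as a standard ActivityPub Update activity (the same mechanism Mastodon uses for edits). The "mistake" field below is purely hypothetical for illustration, not anything PieFed has actually specified:

```python
# Hypothetical ActivityPub Update payload for a corrected post.
# "Update" is the standard ActivityStreams type used for edits;
# the "mistake" extension field is made up for illustration only.
correction = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Update",
    "actor": "https://example.social/u/author",
    "object": {
        "type": "Note",
        "id": "https://example.social/post/123",
        "content": "Corrected text of the original post.",
        "updated": "2024-01-05T12:00:00Z",
        "mistake": True,  # hypothetical flag signalling "I got this wrong"
    },
    # Re-delivering to followers is what would make a notification possible.
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
}
```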
Developer Rimu is also emphasising Trust and Safety and healthy community interactions. One way PieFed does this is by giving authors the ability to add an ‘I’ve changed my mind’ setting. It draws inspiration from Nick Punt’s work on de-escalation on social media.
Love that people are trying out different mechanisms for encouraging a less toxic social experience. Big tech has run engagement-driven social media for so long that I think a lot of us had largely given up on the idea. That said, I really think a lot of the toxic cultural quirks that have followed us even here are a direct result of those engagement-driven priorities, and given enough time away from them people will skew kinder.
Probably for the best. They'd been spinning their wheels while sucking most of the oxygen out of the room for several years now. Time for somebody else to give it a go.
PeerTube absolutely also has this problem, but I'm not sure general-purpose PeerTube instances really make sense at this point anyhow. There are a couple of hobby instances, but if you're not into those hobbies there's not a whole lot for you to do.
Before it gets more love I think it probably needs a flagship instance. Friendica's one of a handful of older fediverse projects where it is legitimately difficult to find an instance to sign up on.
I think it's worth keeping the idea of party reform in mind. If fascism is successfully repressed, there will be significant changes that are long overdue.
This is legitimately a great idea. I recommend putting it on lemmy.ml to start with, as they are the closest we've got to a flagship instance. I would say lemmy.world if they weren't down all the time.
Others have mentioned this, but I'd like to specifically emphasize crossposting. It's really only reasonable to post in the promo subs when you first get started and maybe after a significant change, but you can and should continuously crosspost to relevant communities.
I would also suggest mentioning your community on other parts of the fediverse as the microblogging folks are just as capable of contributing.
I really don't understand your argument. The best case scenario here is that LLMs become easily accessible and are largely unmonetized. That is, OpenAI does not sell usage of the model, nor are the models trained on things like news articles; instead the training data looks more like the OpenAssistant dataset (no relation to OpenAI).
Instead, LLMs are strictly a conversational interface for interacting with arbitrary systems. My understanding of the limitations of this technology (I work in this space) is that that's the only thing we can ever hope for them to do in a resource-efficient way. OpenAI and co have largely tried to obfuscate this fact in order to maintain our reliance on them for things that should be happening locally.
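To make that concrete, here's a minimal sketch of the pattern I mean: the model's only job is to turn natural language into a structured call against systems you already run locally. The `query_local_model` function is a stand-in for whatever locally hosted model you'd actually use, and the tool names are made up for illustration:

```python
import json

def query_local_model(prompt: str) -> str:
    """Stand-in for a locally hosted LLM call (llama.cpp, Ollama, etc.).
    Assume it returns a JSON tool call chosen from the tools we describe."""
    raise NotImplementedError

# The "arbitrary systems" the model is an interface to: plain local functions.
def search_notes(query: str) -> list[str]:
    return [f"note matching {query!r}"]

def set_timer(minutes: int) -> str:
    return f"timer set for {minutes} minutes"

TOOLS = {"search_notes": search_notes, "set_timer": set_timer}

def handle(user_request: str) -> str:
    # Ask the model to pick a tool and arguments; it never answers from
    # memorised training data, it only routes the request to local code.
    raw = query_local_model(
        f"Available tools: {list(TOOLS)}. "
        f"Respond with JSON {{'tool': ..., 'args': {{...}}}} for: {user_request}"
    )
    call = json.loads(raw)
    return str(TOOLS[call["tool"]](**call["args"]))
```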
Edit: jk I'm gaslighting you because I'm a corporate plant. Trickle down trickle down Ronald Reagan is God
Edit 2: To add a little bit of context, OpenAI's business model currently consists of burning money and generating hype. A ruling against them would destroy them financially, as there's no way they'd be able to pay for all of their training data on top of the money they're already burning.
Most likely the Times could win a case on the first point. Worth noting, Google also respects robots.txt, so if the Times wanted they could revoke access, and I imagine that'd be considered something of an implicit agreement to its usage. OpenAI famously does not respect robots.txt.
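For context, "respecting robots.txt" is about as simple as the standard-library check below; whether a publisher has opted out is trivially discoverable. GPTBot is the user agent OpenAI publishes for its crawler, and the article URL here is made up:

```python
from urllib.robotparser import RobotFileParser

# A well-behaved crawler checks the site's robots.txt before fetching.
rp = RobotFileParser()
rp.set_url("https://www.nytimes.com/robots.txt")
rp.read()

# Hypothetical article URL, purely for illustration.
article = "https://www.nytimes.com/2023/12/27/example-article.html"
print(rp.can_fetch("GPTBot", article))     # is OpenAI's crawler allowed?
print(rp.can_fetch("Googlebot", article))  # is Google's crawler allowed?
```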
Google Books previews are allowed primarily on the basis that you can thumb through a book at a physical store without buying it.
I don't think NYT contributors should expect a payday out of this, but the precedent set may mean that they could expect some royalties for future work that they own outright. The precedent is really the important part here, and this will definitely not be the only suit.
Your first point is probably where we're headed, but it still requires a change to how these models are built. There's absolutely nothing wrong with a RAG-focused implementation, but those methods are not well developed enough for there to be turnkey solutions. The issue is still that the underlying model is fairly dependent on works that they do not own to achieve the performance standards that have become more or less a requirement for these sorts of products.
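For anyone unfamiliar, a RAG setup looks roughly like the sketch below: retrieve documents you actually have rights to at query time and hand only those to the model, rather than relying on what's baked into the weights. The retrieval here is a toy TF-IDF search, and `generate` is a stand-in for whatever model you'd actually call:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Documents you actually have the rights to serve.
corpus = [
    "Licensed article about local elections.",
    "Owned documentation for the product.",
    "Archived press release from 2019.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(query: str, k: int = 2) -> list[str]:
    # Toy retrieval: rank owned documents by TF-IDF cosine similarity.
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

def generate(prompt: str) -> str:
    """Stand-in for the actual LLM call; not a real API."""
    raise NotImplementedError

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    # The model only ever sees retrieved context, so sources are known and
    # attributable, unlike whatever happens to be memorised in the weights.
    return generate(f"Answer using only this context:\n{context}\n\nQ: {query}")
```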
With regard to your second point, it's worth considering how paywalls will factor in. The Times intends to argue that these models can be used to bypass their paywall, something Google does not do.
Your third point is wrong in very much the same way. These models do not have a built-in reference system under the hood, and so they cannot point you to the original source. Existing implementations specifically do not attempt to do this (there are of course systems that use LLMs to summarize a query over a dataset, and that's fine). That is, the models themselves do not explicitly store any information about the original work.
The fundamental distinction between the two is that Google does a basic amount of due diligence to keep their usage within the bounds of what they feel they can argue is fair use. OpenAI so far has largely chosen to ignore that problem.
That actually seems like a win-win, minus all the extra car accidents. A small price to pay for productivity.