You’re getting downvoted, but the only way a business buys Frameworks is if they’re running a pilot program. They’re just not proven in that environment yet.
For a dev going to a coffee shop... sure. It’s your work laptop.
This right here. Disney and Nintendo get no more of my hard earned $$$. These companies create iconic products and they get paid for it. All of the DRM anti-consumer over-reach is where I draw the line though.
Thank you for the work on this extension. I understand the idea and the simplicity, and laud the work that went into it.
Ideally, security should be composed of small, visible file connectors like the one you have here. They’re auditable almost by definition: you can check them for external calls, obfuscation, and hidden exploits.
The main issue, and a sad state of affairs today, is hardware-based security devices, or “enclaves”. These are all closed, and many big players lock them down.
Ideally, the TOTP key should be stored in an enclave to reduce the chance of it leaking. A master key is nice, but it’s still a software control on an assumed-compromised system.
With that said, isn’t the whole benefit of TOTP the human interaction? If one browser requests a login and the code comes from the keyboard, it’s difficult to reuse that code. Even with a keylogger, an attacker gets one session. When the secret is stored locally, they can automate it and log in from anywhere once they crack or sniff the master password.
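To make that concrete, here’s a minimal RFC 6238 TOTP sketch in Python. The base32 secret, SHA-1, 6 digits, and 30-second step are just the common defaults I’m assuming, not anything from this extension; the point is that anyone holding the stored secret can run this in a loop from any machine, no keyboard or human involved.

```python
# Minimal RFC 6238 TOTP sketch (assumed defaults: base32 secret, SHA-1,
# 6 digits, 30-second step). Whoever holds the secret can generate codes
# forever, from anywhere.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step                  # current time step
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Throwaway example secret (hypothetical, not from any real account):
print(totp("JBSWY3DPEHPK3PXP"))
```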
I really like this type of security, and it’s likely the future, but the entrenched industry makes it very difficult for anyone who isn’t a big player to secure things properly.
Passkeys try to get around this by making credentials disposable and per-device, locked in an enclave. Hardware assurance is the roadblock.
And maybe you’ve run the numbers and realized that those “terror tunnels” would cost too much to be real. It’s a good excuse to ethnically cleanse a region you have no right to.
Most people here don’t understand what this is saying.
We used to have “pure” human-generated data, verifiably so, since LLMs and image generators didn’t exist. Any bot-generated data was easy to filter out because it lacked sophistication.
Then ChatGPT and SD3 enter the chat and generate data that is nearly indistinguishable from human output, but with a few errors here and there. These errors, while few, are spectacular and make no sense given the training data.
2 years later, the internet is saturated with generated content. The old datasets are like gold now, since none of the new data is verifiably human.
This matters once you’ve played with local machine learning and understand how these machines “think”. If you feed an AI-generated set to an AI as training data, it learns the mistakes along with the data. Each generation is like another round of mutations, until eventually it just produces garbage.
Models trained on generated sets slowly but surely degrade without a human touch. Now scale that up to the internet: when 50% of your dataset is machine generated, the model you train on it starts to deteriorate in proportion. Do this long enough and that 50% becomes 60, 70, and beyond.
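As a toy sketch of that degradation (under the simplifying assumption that a generative model can only regurgitate its training distribution, which is not how production models actually work), you can resample a dataset from itself generation after generation and watch the diversity shrink:

```python
# Toy illustration only: each "generation" resamples, with replacement, from
# the previous generation's dataset, standing in for a model that can only
# reproduce what it saw. Unique "human" examples can be lost but never
# recovered, so diversity shrinks every generation.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
data = rng.normal(size=n)            # generation 0: purely "human" data

for gen in range(1, 11):
    data = rng.choice(data, size=n)  # train/generate: resample prior output
    print(f"gen {gen:2d}: unique samples left = {len(np.unique(data)):4d}, "
          f"std = {data.std():.3f}")
```

Mixing in fresh human data slows the decay; feeding the model only its own output accelerates it.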
Human creativity and thought have yet to be replicated. These models have no human ability to be discerning, or to sleep and recover from errors. They simply learn imperfectly and generate new, less perfect data in a digestible form.
For an IDE, VSCode is the usual recommendation. Some of the plugins really help make code readable and digestible.