  • You're right! Not sure why I thought SimpleX was a fork; it's definitely just using the Signal protocol. Thanks for the clarification. That said, the UX objectively needs some work to get to where Signal is. SimpleX is odd: easy to use, yet confusing and unreliable. I've been using it for a little over a year now, and very often messages just stop getting delivered or received, forcing me to fall back to Signal.

    SimpleX is still very promising and more secure than Signal if your threat model necessitates it, but I continue to champion Signal for its ease of use, reliability, and security compared to more mainstream messengers.

  • Keep spreading FUD, my guy 😎

  • The day security researchers say Signal is bad is the day I'll stop using it. Until then, it's the best option we have that provides both great privacy and UX. The only thing that comes close - and it still has a ways to go - is SimpleX, but it's basically a Signal fork and its devs still support Signal.

  • Near-daily Apex player here, moving to Linux full time again now that many more games work on it (knowing Apex no longer does). It will suck, but fuck Microsoft, and good riddance, EA.

  • With your first sentence, I can say you’re wrong.

    Except I'm not wrong. The model they ran is nearly four orders of magnitude smaller than even the smallest "mini" models that are generally available; see TinyLlama 1.1B [1] or Phi-3 mini 3.8B [2] for comparison. Most "mini" models range from 1 to about 10 billion parameters, which makes running them incredibly inefficient on older devices.

    That doesn’t mean it can’t run it. It just means you can’t imagine that.

    But I can imagine it. In fact, I could have told you it would need a significantly smaller model to run at an adequate pace on older hardware. It's not a mystery at all; it's a known factor. I think it's absolutely cool that they did it, but let's not pretend it's more than what it is: a modern version of running Doom on non-standard hardware, or Linux on a business card [3].

    [1] https://huggingface.co/TinyLlama/TinyLlama-1.1B-step-50K-105b

    [2] https://ollama.com/library/phi3:3.8b-mini-128k-instruct-q5_0

    [3] https://www.thirtythreeforty.net/posts/2019/12/my-business-card-runs-linux/

  • But the hardware is not capable. It's running a minuscule custom 260k-parameter LLM, and the "claim to fame" is that it wasn't slow. Great? We already know tiny models are fast; they're just less accurate and perform worse than larger models. All they did was make an even-smaller-than-normal model. This is akin to getting Doom to run on anything with a CPU: cool and impressive, but it doesn't do much for anyone beyond being an exercise in doing something because you can.
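
    As a rough illustration (my sketch, not the commenter's), here is the size arithmetic behind this point, assuming fp16 weights at 2 bytes per parameter and ignoring activations and runtime overhead:

    ```python
    # Back-of-the-envelope weight sizes: memory scales roughly linearly with
    # parameter count, which is why a 260k model runs where a 1B+ model cannot.
    def approx_weights_size(params: int, bytes_per_param: float = 2.0) -> str:
        size = params * bytes_per_param
        for unit in ("B", "KB", "MB", "GB"):
            if size < 1024:
                return f"{size:.1f} {unit}"
            size /= 1024
        return f"{size:.1f} TB"

    print(approx_weights_size(260_000))        # custom 260k model: ~0.5 MB
    print(approx_weights_size(1_100_000_000))  # TinyLlama 1.1B [1]: ~2 GB
    print(approx_weights_size(3_800_000_000))  # Phi-3 mini 3.8B [2]: ~7 GB
    ```

    At that scale the 260k model's weights are over three orders of magnitude smaller than even TinyLlama's, which is the whole trick.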

  • Check out Notesnook. I've tried most of the ones you've listed and have been really enjoying how well it works compared to the competition, considering it's end-to-end encrypted.

    A few features:

    • Clients and server are open source.
    • End-to-end encrypted note syncing.
    • You can publish public notes.
    • You can publish private notes that require a password to view.
    • You can self-host the sync server.
    • You can self-host the publishing server.
    • Full offline mode.
    • At-rest encryption.
    • Multi-platform clients with feature parity (Android, iOS, Linux, Windows, macOS, Web).
    • Most, if not all, of the general features you'd expect from a note-taking application.

    One thing I really like about the project is how open they are about what they're doing, why they're doing it, and what the future holds. It's been great watching their roadmap (https://notesnook.com/roadmap/) as promised features land and new ones get added, and I've only been using it for less than a year now!

  • Did they comment (maybe I missed it) on why they're ending development?

  • They think that because he inherited a recovering economy, he himself had some major part in it.

  • Actually yes. They want to privatize it so that they can make money on it. Failure is the goal.

    Actually yes. They want to privatize it so that they can ~~make money on it~~ further exploit the working class. Failure is the goal.

    Although you're right, I like to call out what it will do to everyone so it's more explicit and will hopefully click in people's minds.

  • Trump is Putin's puppet. He's set to destroy whatever he can.

  • Person of the year

  • It's more about what he represents.

  • Money could maybe provide more resources to care for people, but the core issue here is that adults who were foster children lack the support of a family - which no amount of money can fix.

    Billions of dollars taken from billionaires to support them for a few more years would absolutely help. Maybe not all of them, but any it does help would be well worth it. Billionaires don't need more than one yacht.

  • 100% agree.

    For anyone who may disagree, consider thinking of excess wealth as excess food.

    If you were in a stadium full of people representing all of humanity, and you had more food than you could ever eat in multiple lifetimes, are you not an evil person for not sharing with those who are literally starving to death?

    These are people with enough wealth to easily pay a team of people to plan out how to appropriately give away most of it, so they wouldn't have "excess food" by the time they die, all without impacting their day-to-day lifestyle. Instead, they let others starve.

  • I was also not sure what this meant, so I asked Google's Gemini, and I think this clears it up for me:


    This means that the creators of Llama 3.3 have chosen to release only the version of the model that has been fine-tuned for following instructions. They are not making the original, "pretrained" version available.

    Here's a breakdown of why this is significant:

    • Pretrained models: These are large language models (LLMs) trained on a massive dataset of text and code. They have learned to predict the next word in a sequence, and in doing so, have developed a broad understanding of language and a wide range of general knowledge. However, they may not be very good at following instructions or performing specific tasks.
    • Instruction-tuned models: These models are further trained on a dataset of instructions and desired outputs. This fine-tuning process teaches them to follow instructions more effectively, generate more relevant and helpful responses, and perform specific tasks with greater accuracy.

    In the case of Llama 3.3 70B, you only have access to the model that has already been optimized for following instructions and engaging in dialogue. You cannot access the initial pretrained model that was used as the foundation for this instruction-tuned version.

    Possible reasons why Meta (the creators of Llama) might have made this decision:

    • Focus on specific use cases: By releasing only the instruction-tuned model, Meta might be encouraging developers to use Llama 3.3 for assistant-like chat applications and other tasks where following instructions is crucial.
    • Competitive advantage: The pretrained model might be considered more valuable intellectual property, and Meta may want to keep it private to maintain a competitive advantage.
    • Safety and responsibility: Releasing the pretrained model could potentially lead to its misuse for generating harmful or misleading content. By releasing only the instruction-tuned version, Meta might be trying to mitigate these risks.

    Ultimately, the decision to release only the instruction-tuned model reflects Meta's strategic goals for Llama 3.3 and their approach to responsible AI development.
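
    To make the distinction concrete, here's a minimal sketch (mine, not Gemini's) of using the released instruction-tuned checkpoint with the Hugging Face transformers library; the model id is the real one for Llama 3.3, and the point is that no corresponding base checkpoint was published:

    ```python
    # Minimal use of the instruction-tuned release. There is no
    # "meta-llama/Llama-3.3-70B" base checkpoint to load, only -Instruct.
    # (Actually running a 70B model also assumes very substantial hardware.)
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-3.3-70B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Instruct models expect a chat template rather than raw text continuation:
    messages = [{"role": "user", "content": "Explain base vs. instruct models."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```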

  • I bought a 20 TB external hard drive a year ago for about 1.5 cents per GB ($0.015/GB). That was the after-tax price, so the drive itself was technically even cheaper.

    $301.69 / 20,000 GB ≈ $0.0151 per GB
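
    For anyone double-checking the unit math, a quick sketch (mine) with the numbers above:

    ```python
    # Cost-per-unit check for a $301.69, 20 TB (20,000 GB) drive.
    price_usd, capacity_gb = 301.69, 20_000
    print(f"${price_usd / capacity_gb:.4f} per GB")  # $0.0151 per GB (~1.5 cents)
    print(f"${price_usd / (capacity_gb / 1000):.2f} per TB")  # $15.08 per TB
    ```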