Posts: 1 · Comments: 666 · Joined: 2 yr. ago

  • It refers to the fact that feelings are not a reflection of outside reality, but a reflection of one's perception of it. According to OP, this is proven by how feelings completely change when you simply change the way the brain perceives reality, via a psychotropic compound, while actual reality remains unchanged.

    This is a well-known scientific and philosophical fact, which OP has only come to know recently thanks to personal experience with psychotropic drugs.

    That epiphany resulted in the shower thought we are commenting on.

    beep beep, I am not a bot, this action wasn't performed automatically

  • They probably reckon that, overall, they lose more from strong cryptography than they risk from other countries intercepting US communications. They must have other solutions in place to protect confidential information. But they likely struggle with encryption being so widely used by everyone: even grandmas can now encrypt their communications without much effort.

  • That is not how it works. Your smartphone has the whole dictionary available, same as an LLM, yet it is something very different. People super confidently holding forth about AI on Lemmy are the real hallucinating parrots.

  • I don't consider the WSJ reputable. I meant that even a journal considered reputable, like the WSJ, has been caught publishing fake news in the past. That's why I say I am in favor of filtering all of Murdoch's media.

    Edit. I added an adjective in the original comment to make it clearer

  • The real challenge is: "how can users judge what is fake news?". In a situation like this it is an extremely difficult task even for newspapers with journalists in the field. See what's happening with the blame-shifting over the bombing of Gaza's hospital.

    Even the Guardian and the BBC have trouble working out where the truth lies.

    A solution could be filtering the sources (for instance, no unknown blogs, no the Sun or Fox News, only reputable sources such as the Guardian and the BBC). But important real news might then be missed, reported as direct testimony by journalists in the field. Supposedly reputable sources such as the WSJ are also known to have shared fake news, particularly when it comes to this conflict. And even reputable sources are biased.

    It is an extremely difficult topic. No one has a definitive answer unfortunately.

    I would be in favor of filtering at least the widely known sources of fake news (shady blogs, all of Murdoch's media and so on).

    Edit. An adjective to clarify

  • The whole year. Companies that laid people off (Meta, Google, Microsoft) and did stock buybacks got a huge boost on the market.

    The stock market has been demanding layoffs since even before ChatGPT took over. That's it. AI is just another keyword to push market prices even further.

  • You are mixing sci-fi-level cutting-edge basic research (fusion) with commercial products (ChatGPT). They are two very different types of proof of concept.

    And both will likely revolutionize human society. Fusion simply won't become a commercial reality for another 30-50 years. AI has been on the market for years now. Generative models are also only a few years old. They are simply getting better, and now new products can be built on top of them.

    (btw, I already use GPT-4 productively every day at work and it helps me a lot)

  • This is why I suggest people give perplexity.ai a try, to understand how these tools will work in the near future. And why I don't understand why I get downvoted for it.

    The current "free" ChatGPT was created as a proof of concept, not as a finished, complete solution to humanity's problems. What we have now is an LLM showcase: for OpenAI, a way to improve the product via human feedback; for everyone else, a chance to enjoy what is already, with all its limitations, an extremely useful tool.

    But this kind of LLM is intended to be a building block of future solutions: to enable interactivity, summarization, and analysis features within larger products with richer, more refined feature sets.

    If you don't have the paid version of ChatGPT, again, try perplexity.ai with the Copilot feature to see a (still imperfect, under development) proof of concept of the near future of AI-assisted research.

    And more tools will come that will make it easier to navigate the huge amount of information that is the main issue of the modern internet.

    For your specific case: GPT-3.5 has poor logical and mathematical capabilities. GPT-4 is much better at that. But still, using a language model for math is almost never a good choice. What you'd need is an LLM able to pull information from the internet and to call out to a math tool, such as Python or Matlab. These options are currently available in ChatGPT via plugins, but they are suboptimal. In the future you'll have better products that combine an LLM, focused internet search, and math tools.

    We should focus on the future, not the present, when discussing AI. LLM-based products are in their infancy.
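
    The "LLM plus math tool" split described above can be sketched in a few lines of Python. This is a minimal, illustrative helper and not any real ChatGPT plugin API: the assumption is that the model only emits an arithmetic expression string, and an actual interpreter does the computation.

    ```python
    import ast
    import operator

    # Map AST operator nodes to real arithmetic, so the evaluation is
    # done by Python rather than by a language model "guessing" digits.
    _OPS = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
        ast.Pow: operator.pow,
        ast.USub: operator.neg,
    }

    def evaluate(expression: str) -> float:
        """Safely evaluate a plain arithmetic expression string."""
        def _eval(node):
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
                return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
            if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
                return _OPS[type(node.op)](_eval(node.operand))
            raise ValueError("unsupported expression")
        return _eval(ast.parse(expression, mode="eval").body)

    # Imagine the model answered a word problem with this expression;
    # the tool, not the model, performs the arithmetic.
    print(evaluate("12 * (3 + 4) - 5 ** 2"))  # 59
    ```

    The design point is the division of labor: the model handles language (turning a word problem into an expression), while a deterministic tool handles the math it is unreliable at.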

  • Because in many languages other than English, "America" is the whole continent. What Americans call "America" is the United States of America. We need a word for its citizens that is not the name of the inhabitants of the whole continent.

    No, its taste isn't something Italians would appreciate. It tastes like a cheap mix of a cheap German frankfurter and cheap Calabrian 'nduja. It's almost impossible to find in Italy.

    I have never seen it in my life in Italy