
Greg Clarke @Greg@lemmy.ca
Posts
54
Comments
669
Joined
3 yr. ago

Permanently Deleted

  • Canada is underutilizing that new second land border with Denmark

  • OP can use a Cloudflare tunnel which would take care of caching and prevent any accidental DDoS attacks.
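    A minimal sketch of that tunnel setup using Cloudflare's `cloudflared` CLI. The tunnel name, hostname, and local port below are placeholders, not anything from the original post:

    ```shell
    # One-time setup: authenticate and create a named tunnel
    cloudflared tunnel login
    cloudflared tunnel create my-homelab   # "my-homelab" is a placeholder name

    # ~/.cloudflared/config.yml (hostname and port are placeholders):
    #   tunnel: my-homelab
    #   credentials-file: /home/user/.cloudflared/<tunnel-id>.json
    #   ingress:
    #     - hostname: app.example.com
    #       service: http://localhost:8080
    #     - service: http_status:404   # catch-all for unmatched requests

    # Point DNS at the tunnel, then run it
    cloudflared tunnel route dns my-homelab app.example.com
    cloudflared tunnel run my-homelab
    ```

    Traffic then reaches the origin only through Cloudflare's edge, which handles caching and absorbs volumetric attacks without exposing the home IP.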

  • Permanently Deleted

  • 182cm and satisfied though I wouldn't mind playing around with different heights for a day. Like being 10cm tall so I could play with my daughter as one of her toys. Or being 10m tall so I could clean my gutters.

  • TIL about waterway charts! I've only ever used maritime charts, even when traversing canal systems. I'm going to see if I can find up to date waterway charts for my local area.

  • It was painful scrolling through his feed 😅

  • I couldn't find this tweet on Elon Musk's Twitter feed

  • Permanently Deleted

  • It's scary that some people's first instinct for getting reliable information is to ask Facebook. But to be fair, government websites are usually difficult to navigate and Google search is useless nowadays.

  • It is impossible to accurately quantify how many deaths and human years lost can be directly attributed to Brian Thompson. UnitedHealthcare themselves may have some data but there are also indirect deaths and shortened life spans as a result of denied or delayed treatment.

  • That's my point: if the model returns a hallucinated source, you can probably disregard its output. But if the model provides an accurate source, you can verify its output. Depending on the information you're researching, this approach can be much quicker than using Google. Out of interest, have you experienced source hallucinations on ChatGPT recently (last few weeks)? I have not experienced source hallucinations in a long time.

  • Generative AI is a tool; sometimes it's useful, sometimes it's not. If you want a recipe for pancakes, you'll get there a lot quicker using ChatGPT than using Google. It's also worth noting that you can ask tools like ChatGPT for their references.

  • Ugh, another solar system joke, 1 Star

  • I guess this guy doesn't like to get misgendered

  • I know the difference. None of OpenAI, Google, or Anthropic has admitted they can't scale up their chat bots. That statement is not true.

  • Currently very few jobs should be replaced with AI. But many jobs should be augmented with AI. Human-in-the-loop AI amplifies the finite resource of smart humans.

  • That may have been true for the early LLM chatbots, but not anymore. ChatGPT, for instance, now writes code to answer logical questions. The o1 models have background token usage because each response is actually the result of multiple background LLM responses.

  • The title of the article is literally a lie, which is easily fact checked. Follow the links to quotes in the article to see what the quoted individuals actually said about the topic.

  • No, a chat bot as it's talked about here is not an LLM. This article is discussing limitations of LLM training data and implying that chat bots cannot scale as a result. There are many techniques that can be used to continue to improve chat bots.

  • I'm sorry if I'm coming across as condescending; that's not my intent. It's never been "as simple as just throwing more data and CPU at the problem". There were algorithmic challenges for every LLM evolution. There are still lots of potential improvements using the existing training data. But even if there weren't, we'd still see loads of improvements in chat bots because of other techniques.

    Edit: typo

  • People who don't understand those terms are using them interchangeably