• trollercoaster@sh.itjust.works
      1 month ago

      Even if you could, you’d need a functioning technical solution first, and not a machine that’s just built to superficially pretend it’s such a solution.

  • kbal@fedia.io
    1 month ago

    Factiverse’s success rate is around 80%

    That seems pretty close to useless, even if we assume the criteria they’ve used to define success are spot-on. Funny how they don’t mention it until the second-to-last paragraph.

  • doctortofu@reddthat.com
    1 month ago

    That’s great and all, but it assumes that people actually WANT true and correct information instead of sound bites that confirm their biases and align with their tribalistic mindset, which is quite often simply not the case…

  • pulsey@feddit.org
    1 month ago

    As of today, Factiverse says it outperforms GPT-4, Mistral 7-b, and GPT-3 in its ability to identify fact-check worthy claims in 114 languages.

    What about newer models? These are quite old by now.