
AI - Potential new fact checker

Discussion in 'Too Hot for Swamp Gas' started by G8trGr8t, Nov 28, 2024.

  1. G8trGr8t

    G8trGr8t Premium Member

    32,414
    12,159
    3,693
    Aug 26, 2008
    Trying to think of a way we can learn to work off the same page of information/facts led me to a bigger concern. As I pondered the concept of AI becoming a fact checker to monitor social media, I began to think of the monster AI complex that Musk is building, and how Huang seems to be fawning over Musk and his ability to force things through, and I got a bad feeling. Opposing AIs, living in different realities because they are programmed drastically differently, enhancing the division. Will Elon's budding AI take over gubmnt functions? Would we know if it did?

    Generative AI is already helping fact-checkers. But it’s proving less useful in small languages and outside the West | Reuters Institute for the Study of Journalism

    The team at MythDetector, a fact-checking platform in Georgia, has also begun experimenting with generative AI. Editor-in-Chief Tamar Kintsurashvili says that artificial intelligence has been helping them detect harmful information that’s spreading on digital platforms through a matching mechanism where they label some content as false and AI is able to find similar false content online.

    “When we label some content, then AI finds similar content,” she explains. “Hostile actors have learned how to avoid social networks’ policies so sometimes coordination is achieved not through distribution of content through Facebook pages, but through individuals mobilised by political parties or far-right groups. Since we have access to only the public pages and groups, it's not always easy to identify individuals through social listening tools, such as CrowdTangle.”

    It is worth pointing out that Meta announced that it will be shutting down CrowdTangle by August 2024, months before the Georgian election in October. The company said it is replacing the tool with a new Content Library API.

    While AI helps them flag content, Kintsurashvili says, they still have to go through it themselves because there’s some mismatching as this tool hasn’t been trained in the Georgian language. “It's saving us time when tools are working properly, but some tools are still in testing mode, and AI fact-checking tools are not very efficient yet, but I’m sure they will improve their efficiency in the future,” she says.
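    Roughly how that "label once, find similar content" matching could look, as a sketch only. The article doesn't name MythDetector's actual tools, so the embedding model, threshold, and sample text below are placeholders:

    from sentence_transformers import SentenceTransformer

    # Multilingual embedding model (placeholder choice, not MythDetector's actual stack).
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    # Claims a human fact-checker has already labeled as false.
    labeled_false = [
        "Claim A that was debunked ...",
        "Claim B that was debunked ...",
    ]

    # New posts pulled from public pages and groups.
    incoming_posts = [
        "A reworded version of claim A ...",
        "An unrelated post about the weather ...",
    ]

    false_vecs = model.encode(labeled_false, normalize_embeddings=True)
    post_vecs = model.encode(incoming_posts, normalize_embeddings=True)

    # Vectors are normalized, so a dot product gives cosine similarity.
    scores = post_vecs @ false_vecs.T

    THRESHOLD = 0.75  # arbitrary; would need tuning per language
    for post, row in zip(incoming_posts, scores):
        if row.max() >= THRESHOLD:
            match = labeled_false[int(row.argmax())]
            print(f"Flag for review: {post!r} resembles debunked claim {match!r}")

    Anything the matcher flags still goes to a human reviewer, which lines up with the mismatching Kintsurashvili mentions for Georgian-language content.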
     
    • Informative x 1
  2. slocala

    slocala VIP Member

    3,295
    783
    2,028
    Jan 11, 2009
    It might be great for pointing out statements that contradict someone's past statements.
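
    Roughly what that could look like, as a sketch only: run a new statement against past statements with an off-the-shelf natural-language-inference model and surface the pairs it scores as contradictions. The model choice and example sentences are placeholders:

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # Off-the-shelf NLI model (placeholder choice).
    tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
    model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

    past_statement = "We will never cut the education budget."
    new_statement = "The education budget will be reduced by ten percent next year."

    # Score the pair: does the new statement contradict the past one?
    inputs = tokenizer(past_statement, new_statement, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]

    labels = ["contradiction", "neutral", "entailment"]  # roberta-large-mnli label order
    print(labels[int(probs.argmax())], round(float(probs.max()), 3))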

    Personally, I don’t trust that any LLM can or should be some library representing the culmination of societal knowledge and the arbiter of truth. Society grows as ideas are translated into discovery and challenge conventional wisdom. Will LLMs squash ideas or provide “Minority Reports”?

    We are careening down a path where no human is an expert or trusted. That is a collapse of society.
     
  3. demosthenes

    demosthenes Premium Member

    8,904
    1,083
    3,218
    Apr 3, 2007
    There’s a long way to go here because, to date, LLMs / AI produce a lot of false information, and attempts to detect AI usage produce a lot of false positives. I think it could be useful for detecting groups working together in a dispersed manner. We have used AI to help with discovery in a similar way.
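
    One simple version of spotting dispersed coordination, as a sketch: flag when many distinct accounts post near-identical text in a short window. The posts, threshold, and time bucket below are made-up placeholders:

    from collections import defaultdict
    from datetime import datetime

    posts = [
        {"user": "acct1", "time": "2024-10-01T09:00", "text": "Vote X, Y is corrupt!"},
        {"user": "acct2", "time": "2024-10-01T09:03", "text": "Vote X, Y is corrupt!!"},
        {"user": "acct3", "time": "2024-10-01T09:05", "text": "vote x, y is corrupt"},
    ]

    def normalize(text):
        # Lowercase and strip punctuation so light rewording still matches.
        return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

    # Group posts by normalized text and one-hour bucket, then count distinct accounts.
    buckets = defaultdict(set)
    for p in posts:
        hour = datetime.fromisoformat(p["time"]).strftime("%Y-%m-%d %H:00")
        buckets[(normalize(p["text"]), hour)].add(p["user"])

    for (text, hour), users in buckets.items():
        if len(users) >= 3:  # arbitrary threshold
            print(f"Possible coordination at {hour}: {len(users)} accounts posted {text!r}")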
     
    • Informative x 1