
Nobel Laureate’s AI warning

Discussion in 'Too Hot for Swamp Gas' started by Trickster, Oct 15, 2024.

  1. Trickster

    Trickster VIP Member

    10,117
    2,474
    3,233
    Sep 20, 2014
    Last week, the Nobel Prize in Physics was awarded to two scientists — John J. Hopfield and Geoffrey E. Hinton — for research on computers that led to current AI technology like ChatGPT.

    Hinton is often referred to as the “godfather” of artificial intelligence.

    He is also outspoken about the serious risks AI poses to humanity — even leaving a high-profile role at Google so he could openly discuss his concerns.

    Here’s some of what Hinton has said about the dangers of artificial intelligence:
    • Artificial intelligence technology could “amplify social injustice, erode social stability, and weaken our shared understanding of reality that is foundational to society.”
    • AI makes “it much easier for authoritarian governments to manipulate the electorate with fake news.” We need “legislation making it illegal to produce or share fake images or videos unless they are marked as fake.” (We already do this with currency.) But in the U.S., “one of the major political parties has tied its fate to the successful propagation of fake news.”
    • Combat robots powered by artificial intelligence could “allow rich countries to invade poor ones more easily.” And military systems that use AI to automatically select and kill targets could make wars impossible to control.
    There is no one more qualified to testify about the ways artificial intelligence could be misused or get out of control. And of course Hinton is hardly the only expert warning about the threat of artificial intelligence — many of the world’s leading AI technologists are expressing concern.

    As a society, we need to put strong guardrails on artificial intelligence and on the for-profit companies that are already locked in an AI arms race. But it’s no secret that elected officials are often too slow in keeping up with evolving technology — and that they are very much influenced by Big Tech.

    That’s why it is essential for the American people to demand that Congress do more to grasp the risks of artificial intelligence — and to address those risks with comprehensive legislation and regulations that create guardrails to protect us — before it’s too late.


    Tell Congress:

    No less an expert than Nobel laureate Geoffrey Hinton — the “godfather of AI” — has been sounding the alarm about the profound risks artificial intelligence poses. Those risks are so serious, and the systems are evolving so rapidly — in part because major corporations are already locked in an AI arms race — that there isn’t time for the government to do its usual bumbling with new technology. The American people need you to fully grasp the threat of AI and to pass comprehensive legislation to protect us.

    president@citizen.org

     
    • Fistbump/Thanks! x 1
  2. wgbgator

    wgbgator Premium Member

    30,267
    1,910
    2,218
    Apr 19, 2007
    I respect his concerns, but this kind of rhetoric mainly serves to fuel investment in AI and functions as hype. AI is a scam; making it out to be some scary thing with endless possibilities for social control does a lot of work for the people who are making money from it. AI sucks; it mainly just generates social media and digital content slop. It's just crypto for slightly smarter people.
     
    • Disagree Bacon! x 1
    • Come On Man x 1
  3. slocala

    slocala VIP Member

    3,302
    784
    2,028
    Jan 11, 2009
    I have my concerns — not all actors will be profit-motivated. Quantum computing + AI that can learn for itself can play out all the scenarios faster than a human and reach an educated conclusion. The issue ultimately is runaway code and AI that is not controlled in a broad-based digital ecosystem. Malicious instructions that are self-evolving could lie dormant and wreak havoc disassociated from the deployment.
     
  4. docspor

    docspor GC Hall of Fame

    5,875
    1,860
    3,078
    Nov 30, 2010
    Those were my thoughts until Sat morning 2 weeks ago, when I discovered notebooklm. I linked my buddy's faculty webpage & it made a delightful & insightful podcast of his entire career in about 5 mins. Next morning I wrote a fake history of my real friends' band, saved it as a .pdf & dragged it into notebooklm & it made a killer podcast in about 5 mins. The next week, we had a job candidate coming to present his research. I dropped his paper into notebooklm, and it made a 13-min podcast. I took a walk on a beautiful day & listened.
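    For anyone who wants to hack together a bare-bones version of that paper-to-audio trick, here is a minimal Python sketch. It is nothing like what notebooklm actually does (no two-host banter); it just pulls the text out of a PDF with pypdf and reads it aloud with gTTS, and the file names are made up for illustration.

        from pypdf import PdfReader   # pip install pypdf
        from gtts import gTTS         # pip install gTTS

        # Pull the raw text out of the paper (hypothetical file name).
        reader = PdfReader("job_candidate_paper.pdf")
        text = "\n".join(page.extract_text() or "" for page in reader.pages)

        # Keep it to a few thousand characters so the audio stays walk-length.
        excerpt = text[:4000]

        # Turn the excerpt into an MP3 you can listen to on a walk.
        gTTS(excerpt, lang="en").save("paper_audio.mp3")
        print(f"Wrote paper_audio.mp3 from {len(reader.pages)} pages")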
     
    • Informative x 2
  5. docspor

    docspor GC Hall of Fame

    5,875
    1,860
    3,078
    Nov 30, 2010
    This is your post podcastified... glonuts was the band name... I did not change the title of the doc

    https://notebooklm.google.com/notebook/48d3d0b7-e5ab-46a0-904f-1d0a155baf59/audio
     
    • Informative x 1
    • Creative x 1
  6. gator_lawyer

    gator_lawyer VIP Member

    18,219
    6,169
    3,213
    Oct 30, 2017
    Unlikely to survive First Amendment scrutiny with the courts we have.
     
  7. l_boy

    l_boy 5500

    13,024
    1,742
    3,268
    Jan 6, 2009
    You are probably right, but does speech artificially generated by a computer fall under the rights of individual citizens?
     
  8. gator_lawyer

    gator_lawyer VIP Member

    18,219
    6,169
    3,213
    Oct 30, 2017
    Somebody is making the video.
     
  9. l_boy

    l_boy 5500

    13,024
    1,742
    3,268
    Jan 6, 2009
  10. wgbgator

    wgbgator Premium Member

    30,267
    1,910
    2,218
    Apr 19, 2007

    This is the other issue … superheating the planet just so somebody can post an AI-generated black Nazi on Twitter.
     
  11. slocala

    slocala VIP Member

    3,302
    784
    2,028
    Jan 11, 2009
    Right now data centers account for about 5% of US electricity demand, and that share is projected to increase to 25-30%. Computing's energy demands cannot be met with traditional O&G alone — "drill baby drill" is for the Luddites. Invest in small-scale nuclear now or we lose the economic war of the future.
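    For a rough sense of what those percentages mean, here is a back-of-envelope sketch in Python. The ~4,200 TWh figure for annual US electricity generation, the 5% and 25% shares, and the plant-count conversion are round assumptions for illustration only.

        # Back-of-envelope: what a 5% -> 25% data-center share of US electricity implies.
        US_GENERATION_TWH = 4200               # assumed annual US electricity generation, TWh
        TWH_PER_GW_YEAR = 8.76                 # one 1-GW plant running flat out for 8,760 hours
        share_now, share_future = 0.05, 0.25   # assumed current and projected shares

        now_twh = US_GENERATION_TWH * share_now
        future_twh = US_GENERATION_TWH * share_future
        added_twh = future_twh - now_twh

        print(f"Data centers today:  ~{now_twh:.0f} TWh/yr")
        print(f"Projected demand:    ~{future_twh:.0f} TWh/yr")
        print(f"New supply needed:   ~{added_twh:.0f} TWh/yr "
              f"(~{added_twh / TWH_PER_GW_YEAR:.0f} one-GW plants)")

    Even with generous rounding, the gap works out to something like a hundred gigawatt-scale plants' worth of new supply, which is the point about needing nuclear.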
     
    • Winner x 1
  12. l_boy

    l_boy 5500

    13,024
    1,742
    3,268
    Jan 6, 2009
    The tragedy is that after about 15 minutes I accidentally closed the browser, so I warmed the planet 0.01 degrees for no good reason. Alas, the AI podcast of the Too Hot Billy Napier thread would have been epic.
     
    • Funny x 1
  13. thomadm

    thomadm VIP Member

    2,973
    710
    2,088
    Apr 9, 2007
    Not really, it's software-generated content. Someone "owns" the data rights to the AI software, so there need to be some laws in that arena to deal with it.
     
  14. gator_lawyer

    gator_lawyer VIP Member

    18,219
    6,169
    3,213
    Oct 30, 2017
    The software isn't generating itself. A person makes the requests and writes the prompts.
     
  15. thomadm

    thomadm VIP Member

    2,973
    710
    2,088
    Apr 9, 2007
    Some of it, but a lot of software can now be generated by AI.

    That's why I said someone needs to be able to own data rights. It's something that will be a huge problem in litigation.
     
  16. WESGATORS

    WESGATORS Moderator VIP Member

    22,627
    1,398
    2,008
    Apr 3, 2007
    Sounds simple, but how would we begin to define "fake"? Now take that definition and apply it to artwork. What if somebody takes a real image and presents it as something else?

    I went to the beach today...

    [image: the underside of a car in need of repair]

    This could be both true and misleading (the picture is the bottom of a car that needs to be repaired); maybe that's the car I took to the beach, and I'm not trying to present the image as what I saw looking out from the beach.

    On second thought, maybe this would be the end of commercials or at least more honest ones:

    [image]

    Go GATORS!
    ,WESGATORS
     
  17. mrhansduck

    mrhansduck GC Hall of Fame

    4,867
    1,003
    1,788
    Nov 23, 2021
  18. LimeyGator

    LimeyGator Official Brexit Reporter!

    I used ChatGPT to summarise your post:

    AI pioneer Geoffrey Hinton, a Nobel laureate, has raised serious concerns about the risks of AI, including its potential to amplify social injustice, manipulate elections, and escalate conflicts. He urges immediate action from Congress to regulate AI and protect society from its potential harms.

    Not all bad.
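    If you wanted to do the summarising in a script instead of the chat window, a minimal sketch with the OpenAI Python client looks roughly like this; the model name is a placeholder, post_text is whatever you paste in, and it assumes an OPENAI_API_KEY in your environment.

        from openai import OpenAI  # pip install openai

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        post_text = "..."  # paste the post to be summarised here

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model will do
            messages=[
                {"role": "system", "content": "Summarise forum posts in two sentences."},
                {"role": "user", "content": post_text},
            ],
        )

        print(response.choices[0].message.content)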
     
  19. docspor

    docspor GC Hall of Fame

    5,875
    1,860
    3,078
    Nov 30, 2010
    Last edited: Oct 16, 2024
    • Like x 1
    • Funny x 1
  20. docspor

    docspor GC Hall of Fame

    5,875
    1,860
    3,078
    Nov 30, 2010
    it's currently free.
     
    • Fistbump/Thanks! x 1