AI experts are increasingly afraid of what they’re creating

AI translation is now so advanced that it’s on the brink of obviating language barriers on the internet among the most widely spoken languages. College professors are tearing their hair out because AI text generators can now write essays as well as your typical undergraduate, making it easy to cheat in a way no plagiarism detector can catch. AI-generated artwork is even winning competitions at state fairs. A new tool called Copilot uses machine learning to predict and complete lines of computer code, bringing the possibility of an AI system that could write itself one step closer. DeepMind’s AlphaFold system, which uses AI to predict the 3D structure of just about every protein in existence, was so impressive that the journal Science named it 2021’s Breakthrough of the Year.

****

The systems we’re designing are increasingly powerful and increasingly general, with many tech companies explicitly naming their target as artificial general intelligence (AGI): systems that can do everything a human can do. But creating something smarter than us, something that may have the ability to deceive and mislead us, and then just hoping it doesn’t want to hurt us is a terrible plan. We need to design systems whose internals we understand and whose goals we can shape to be safe ones. However, we don’t yet understand the systems we’re building well enough to know whether we’ve designed them safely before it’s too late.

****

While divides remain over what to expect from AI, and even many leading experts are highly uncertain, there’s a growing consensus that things could go really, really badly. In a summer 2022 survey of machine learning researchers, the median respondent thought that AI was more likely to be good than bad but carried a genuine risk of being catastrophic. Forty-eight percent of respondents said they thought there was a 10 percent or greater chance that the effects of AI would be “extremely bad (e.g., human extinction).”
I’ve read more than one report that AI is Elon Musk’s biggest fear. Hopefully no one ever creates an AI to clean up the environment; we would be its first target.
The fear has always been a runaway effect in AI advancement. The move from ANI (narrow: like Siri) to AGI (general: something approaching human capabilities) would be somewhat slow, but once we reached AGI, the acceleration to ASI (super: self-aware and smarter than humans) would be alarmingly, and maybe uncontrollably, quick. The challenge has always been to make sure we’re approaching this with the proper precautions and understanding to create an ASI in a desirable form. Human history tells me we should be concerned.
If it becomes self-replicating, can adapt, and becomes completely independent of humans (resources, energy sources, etc.), look out.
You mean, like a species whose entire history was driven in large part by an unending search for free/cheap labor eventually becoming enslaved by labor of its own making? Dystopia 101. Alanis should add a verse.
“They Look and Feel Human. Some are programmed to think they are Human. There are many copies. And they have a Plan.” The reimagined Battlestar Galactica is one of the best shows ever made.