Not hardly. It sounds pretty legit. And scary. “On Tuesday, hundreds of top AI scientists, researchers, and others — including OpenAI chief executive Sam Altman and Google DeepMind chief executive Demis Hassabis — again voiced deep concern for the future of humanity, signing a one-sentence open letter to the public that aimed to put the risks the rapidly advancing technology carries with it in unmistakable terms. ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,’ said the letter, signed by many of the industry’s most respected figures. It doesn’t get more straightforward and urgent than that. These industry leaders are quite literally warning that the impending AI revolution should be taken as seriously as the threat of nuclear war. They are pleading for policymakers to erect some guardrails and establish baseline regulations to defang the primitive technology before it is too late.” Experts are warning AI could lead to human extinction. Are we taking it seriously enough? | CNN Business
With technological improvements come good and bad. Take the internet, for example... lots of good, lots of bad. AI will be the same, but more extreme. WAY more extreme.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Well, hence the ideas in Asimov’s writings about the emergence of a new sentient race and its inherent enslavement and subjugation to humans.
Conspiracy theory. How and why would intelligent AI "destroy us"? The real risk seems to be what humans might do to each other, such as delegating the decision to fire off nukes to some computer program without adequate safeguards.
Murder, destruction, and conquest are the default actions of humans, not of AI. Humanity is 50X more likely to destroy itself. Unless AI saves us from ourselves by eradicating us... wait, I get it now.