In an open letter published on Tuesday, more than 1,370 signatories—including business founders, CEOs and academics from various institutions including the University of Oxford—said they wanted to “counter ‘A.I. doom.’”
“A.I. is not an existential threat to humanity; it will be a transformative force for good if we get critical decisions about its development and use right,” they insisted.
And how can you be certain that, if virus generation is an actual possibility, there won’t already be another AI built to combat such viruses?
Sure, I’m not certain at all, but are you certain enough to bet your life on it?
I mean, even that hypothetical scenario is very scary, though, lol