AI Nightmare Scenario

Machines have neither volition nor sentience; hence, emerging technologies such as AI need careful planning and policy measures for containment and mitigation. Stuart J. Russell used the analogy of nuclear fusion research and its containment efforts to explain that superintelligent machines need risk mitigation methods[1].
If technology is advancing toward, or laying the groundwork for, superintelligent machines, then humanity must confront and try to resolve the societal issues we face today before creating a technology that may make mankind obsolete. If we speak of programming a value system into a superintelligent machine, we must ask who is programming those values and how that entity (or group of people) is implementing them. Bostrom believes researchers should be asking, “How, then, do we program those values into our (potential) superintelligences? What sort of mathematics can define them?[2]” The question is not whether machines can be benevolent or maleficent. The question is whether humanity, the one doing the constructing, is benevolent or maleficent. How can we build a superintelligent machine with a set of values if humanity cannot even resolve questions such as the universalization of human rights? The world faces many human rights issues: gender inequality, social inequality, extreme poverty, and more. How can we expect a positive outcome from implementing technologies such as AI in the current state humanity finds itself in?

Elon Musk and Stephen Hawking are among the harshest critics of AI. “They are among dozens of thought leaders who signed a letter harshly condemning governments’ increasing reliance on AI for military use.[3]” I agree: international humanitarian law has deteriorated worldwide. Many nations that participate in conflicts, whether internal or international, continuously commit war crimes. Who, or what nation, will implement a value system into their autonomous weapons?
When reading Dave Johnson’s article listing 13 dark technology scenarios, I kept thinking that it is not the technology itself that is maleficent; it is the person, group, or nation with the ability to manipulate that technology for evil, or that is not fully aware of or informed about how to mitigate it when using it. Consider swarm drones and drones with GIS/AI capabilities: terrorist organizations as well as nations constantly use them to attack their enemies. I have seen commercial UAVs (Unmanned Aerial Vehicles) that terrorist organizations modified into deadly weapons.

There are also discrimination issues with technology, such as the use of AI in hiring. The reason emerging technology learns to discriminate is that our society discriminates. We live within unjust social systems, including an industry whose workforce represents only a small slice of the world’s population: the technology industry is predominantly male and has inclusivity issues. So who is setting the value system when AI is implemented? It does not represent society as a whole.
Consider an example of the discrimination an AI system can introduce in the workforce: two kids make the same mistakes as teenagers, but one is born into wealth and one into poverty. One can buy his way out of punishment; the other cannot. By the time they reach the workforce, their starting points are far apart. If we let an intelligent machine decide who gets the job, or estimate each individual’s probability of success, the machine will simply perpetuate that inequality. Superintelligent machines will only intensify the state humanity finds itself in.
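To make that mechanism concrete, here is a minimal, purely illustrative sketch. All of the data and the naive “model” below are invented for this example; no real hiring system is this simple. The point is only that a model fit to biased historical outcomes reproduces the bias for two otherwise identical candidates:

```python
# Toy illustration, not a real hiring system: a naive "model" trained on
# biased historical outcomes reproduces the bias. All data here is invented.
from collections import defaultdict

# Hypothetical historical records: (has_juvenile_record, family_wealth, hired)
history = [
    (True,  "wealthy", True),   # bought his way out of punishment; hired anyway
    (True,  "wealthy", True),
    (True,  "poor",    False),  # same teenage mistake, but punished and rejected
    (True,  "poor",    False),
    (False, "wealthy", True),
    (False, "poor",    True),
]

# "Training": estimate the historical hire rate for each group.
counts = defaultdict(lambda: [0, 0])  # (record, wealth) -> [hired, total]
for record, wealth, hired in history:
    counts[(record, wealth)][0] += int(hired)
    counts[(record, wealth)][1] += 1

def predicted_hire_rate(record: bool, wealth: str) -> float:
    hired, total = counts[(record, wealth)]
    return hired / total if total else 0.0

# Two new candidates who made the same mistake as teenagers:
print(predicted_hire_rate(True, "wealthy"))  # 1.0 -- inherits past advantage
print(predicted_hire_rate(True, "poor"))     # 0.0 -- inherits past punishment
```

The model never sees the word “discrimination”; it simply extrapolates from outcomes that were already unequal. That is the sense in which AI perpetuates the inequality embedded in its training data.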
I agree with Shane Legg when he stated, “I think human extinction will probably occur, and technology will likely play a part in this.[4]” I also agree with Elon Musk that emerging technologies such as AI should have national and international regulatory oversight[5]. The global community needs to lay down these regulatory and containment measures now, and it will not be easy.


[1] Paul Ford, “Our Fear of Artificial Intelligence”.
[2] Paul Ford, “Our Fear of Artificial Intelligence”.
[3] “Commentary: What To Do Against the ‘Nightmare Scenario’?”
[4] Maureen Dowd, “Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse”.
[5] Maureen Dowd, “Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse”.




Ingrid Weishaupt is the cofounder and Vice President of Weishaupt & Company, a management consulting firm that helps companies implement Artificial Intelligence. Ingrid advises on the legal and policy aspects of Artificial Intelligence transformation. She can be reached at ingrid@weishauptco.com.






