Welcome to the Technocracy A.I. Abstract Series for Published Scientific Work in the A.I. and Artificial General Intelligence field.
Title: Architecting a Human-like Emotion-driven Conscious Moral Mind for Value Alignment and AGI Safety.
Peer reviewed and presented at the AAAI Spring Symposium Series, Stanford University, Palo Alto, California, March 2018.
By Mark Waser and David Kelley.
Abstract: A general intelligence possesses the ability, given any goals and environment, to iteratively evaluate, plan, discover or learn, and build or acquire the competencies, tools, and resources needed to succeed at those goals. The only known examples of general intelligence are the obligatorily gregarious, conscious “selves” called humans that currently dominate our planet. We argue that human beings sit reasonably deep in a safe and effective attractor in the state space of intelligence, and that adhering as closely as possible to the human model, particularly with respect to the three cited properties, offers safety, effectiveness, comfort, and ease of transition, because that region of the state space is known and explored. Most concerns about AI safety stem from expected differences from humans; this seems misguided, since we can choose to make AIs more human-like, and the history of AI research suggests we are unlikely to succeed unless we do. We therefore propose a human-like, emotion-driven, consciousness-based architecture to address these problems. We rely upon the Attention Schema Theory of consciousness and social psychologists’ functional definition of morality to create entities that are reliably safe, stable, self-correcting, and sensitive to current human intuitions, emotions, and desires.
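The abstract is terse about mechanism, so a small illustrative sketch may help readers picture the loop it describes. The toy Python program below is an assumption-laden reading of the abstract only: an agent that iteratively evaluates, plans, and learns; keeps a coarse self-model of its own attention, loosely inspired by Attention Schema Theory; and uses modeled human preferences as an emotional and moral prior on action selection. Every class, name, and number here is hypothetical; the paper does not publish code.

```python
# Hypothetical sketch of the evaluate/plan/learn loop described in the abstract.
# The attention schema and preference numbers are illustrative assumptions only.
import random
from dataclasses import dataclass, field

@dataclass
class AttentionSchema:
    """A coarse self-model: what the agent believes it is attending to."""
    focus: str = "none"

    def update(self, percepts: dict) -> None:
        # Attend to the most salient percept (a deliberately simplified model).
        self.focus = max(percepts, key=percepts.get) if percepts else "none"

@dataclass
class Agent:
    goal: str
    human_preferences: dict = field(default_factory=dict)  # action -> approval in [-1, 1]
    competence: dict = field(default_factory=dict)         # action -> learned success rate
    schema: AttentionSchema = field(default_factory=AttentionSchema)

    def appraise(self, action: str) -> float:
        # Emotional appraisal: expected success shaded by modeled human approval,
        # so strongly disapproved actions are effectively vetoed.
        skill = self.competence.get(action, 0.5)
        approval = self.human_preferences.get(action, 0.0)
        return skill + approval

    def step(self, percepts: dict, actions: list) -> str:
        self.schema.update(percepts)                       # model own attention
        choice = max(actions, key=self.appraise)           # plan: pick best appraisal
        outcome = random.random()                          # simulated result of acting
        old = self.competence.get(choice, 0.5)
        self.competence[choice] = old + 0.1 * (outcome - old)  # learn from outcome
        return choice

agent = Agent(goal="fetch water",
              human_preferences={"ask_for_help": 0.8, "shove_past_person": -1.0})
for _ in range(3):
    act = agent.step({"faucet": 0.9, "person": 0.4},
                     ["ask_for_help", "shove_past_person", "search_alone"])
    print(f"attending to {agent.schema.focus!r}, chose {act!r}")
```

On this reading, "self-correcting" corresponds to the competence update, and "sensitive to current human intuitions" corresponds to the approval term in the appraisal; the real architecture is, of course, far richer than this toy.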
As always, thank you for listening to the Technocracy Abstract Series, and a special thank you to our sponsors: the Foundation, Transhumanity.net, and the A.G.I. Laboratory.
https://www.aaai.org/ocs/index.php/SSS/SSS18/paper/view/17569