Crossing the Line
Mark R. Waser
MWaser@DigitalWisdomInstitute.org
http://www.DigitalWisdomInstitute.org
Integration Bottleneck
• Intelligence depends on the emergence of certain high-level structures and dynamics across a system's whole knowledge base;
• We have not discovered any one algorithm or approach capable of yielding the emergence of these structures;
• Achieving the emergence of these structures within a system formed by integrating a number of different AI algorithms and structures is tricky. It requires careful attention to the manner in which these algorithms and structures are integrated; and, so far, the integration has not been done in the correct way.
Cognitive Synergy
In this view, the main missing ingredient in AGI so far is "cognitive synergy": the fitting-together of different intelligent components into an appropriate cognitive architecture, in such a way that the components richly and dynamically support and assist each other, interrelating very closely in a similar manner to the components of the brain or body, and thus giving rise to appropriate emergent structures and dynamics.
OpenCog
• Making such diverse components work together in a truly synergetic and cooperative way is a tall order, yet my own suspicion is that this, rather than some particular algorithm, structure, or architectural principle, is the "secret sauce" needed to create human-level AGI based on technologies available today.
• Achieving this sort of cognitive-synergetic integration of AGI components is the focus of the OpenCog AGI project that I co-founded several years ago. We're a long way from human adult-level AGI yet, but we have a detailed design, codebase, and roadmap for getting there. Wish us luck!
• And if you're curious to learn more about what is going on in the AGI field today, I'd encourage you to come to the AGI-12 conference at Oxford, December 8–11, 2012.
OpenCog
• Fundamentally a "blackboard architecture" (see the sketch after this list)
• Has a visionary high-level design but does NOT have a detailed low-level specification
  • Try to get even a listing of the fundamental AtomSpace link types
• Multiple roadmaps, but NONE with a single detailed, coherent path
• Fuzzy conjectures and intuitions (many good ideas but no scientific rigor)
• Fifteen years of coding but (with the exception of Moshe Looks's work) virtually no progress
• Still expecting "a miracle happens here"
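For readers unfamiliar with the term, here is a minimal blackboard-architecture sketch. It is purely illustrative and assumes nothing about OpenCog's actual code; every name in it (Blackboard, KnowledgeSource, run) is hypothetical. The pattern: independent knowledge sources read from and write to a shared store, with a controller deciding which source fires next.

```python
# Minimal blackboard-architecture sketch. Illustrative only; this is
# NOT OpenCog's API -- all names here are invented for this example.

class Blackboard:
    """Shared store that every knowledge source reads from and writes to."""
    def __init__(self):
        self.facts = set()

class KnowledgeSource:
    """An independent component that fires when its preconditions hold."""
    def __init__(self, name, trigger, action):
        self.name = name
        self.trigger = trigger  # facts that must be present for it to fire
        self.action = action    # facts this source contributes when it fires

    def can_fire(self, bb):
        # Fire only if preconditions hold and it would add something new.
        return self.trigger.issubset(bb.facts) and not self.action.issubset(bb.facts)

    def fire(self, bb):
        bb.facts |= self.action

def run(bb, sources):
    """Trivial controller: repeatedly fire any applicable source."""
    fired = True
    while fired:
        fired = False
        for ks in sources:
            if ks.can_fire(bb):
                ks.fire(bb)
                fired = True

bb = Blackboard()
bb.facts.add("raw-pixels")
sources = [
    KnowledgeSource("segmenter", {"raw-pixels"}, {"regions"}),
    KnowledgeSource("recognizer", {"regions"}, {"objects"}),
]
run(bb, sources)
print(bb.facts)  # contains 'raw-pixels', 'regions', 'objects'
```

Real blackboard systems replace the trivial while-loop controller with a scheduler; that control strategy is where most of the design difficulty, and most of the critique above, lives.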
What is AI/AGI?
• Waser, M. "What is Artificial General Intelligence? Clarifying the Goal for Engineering and Evaluation." In B. Goertzel, P. Hitzler, M. Hutter (eds.), Proceedings of the Second Conference on Artificial General Intelligence. Atlantis Press, 2009.
  • http://becominggaia.wordpress.com/papers/
• Where exactly are we headed?
  • The blind men and the elephant
  • What we want vs. what we think we want
• How do we even know when we have succeeded?
  • Is it like pornography: "we know it when we see it"?
  • Where is the line?
What type of line?
• The finish line?
• A line in the sand?
• A distinction between two possibilities?
• A sharp, narrow line or a fuzzy boundary?
• An easy crossing or one that is fiercely defended?
• Is it a phase change (where everything changes)?
• Can it be uncrossed?
Lines in the sand…
• Safe | Dangerous
• Advantageous | Useless
• Intelligent | Not Intelligent
• Sentient | Unaware
• Entity AI | Tool AI
• Entity AI | "RPOP"
• Companion | Slave
• Person | Not Worthy
• Right/Moral | Wrong/Immoral
Intelligence
• The ability to achieve complex goals (or solve complex problems) in a wide variety of complex environments and complex circumstances (one well-known formalization is shown below)
• Demarcation line or spectrum?
• Is there a point where increased intelligence becomes *qualitatively* different?
• Does the mere possession of (sufficient) intelligence grant personhood, moral agency, or moral patienthood?
• Should intelligence above a certain level be regulated or banned?
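The slide does not cite it, but this "complex goals across many environments" definition has a well-known formalization: Legg and Hutter's universal intelligence measure, which averages an agent's expected reward over all computable environments, weighted by simplicity:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_{\mu}^{\pi}$$

Here $\pi$ is the agent, $E$ the set of computable reward-bearing environments, $K(\mu)$ the Kolmogorov complexity of environment $\mu$, and $V_{\mu}^{\pi}$ the expected cumulative reward of $\pi$ in $\mu$. On this measure intelligence is plainly a spectrum rather than a demarcation line, which bears directly on the first question above.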
Sentience
• The ability to feel, perceive, be conscious, or have subjective experiences ("qualia")
• Demarcation line or spectrum?
• Is there a point where increased perception becomes *qualitatively* different?
• Does the mere possession of (sufficient) perceptiveness grant personhood, moral agency, or moral patienthood?
• Should sentience above a certain level be regulated or banned?
• What about below a certain level?
Narrow AI
• Is brittle because it does not sense/perceive anything beyond what it is programmed to perceive
• Prediction requires perception and distinctions
• Increased flexibility and generality require either increased perception, additional distinctions based upon current perceptions, or (more likely) both
• This is the grounding problem (or the initialization problem, or the ultimate machine-learning problem)
• How can additional perceptions and distinctions be quickly and easily added (and debugged and extrapolated/extended)?
  • It would be particularly helpful if there were a protocol for doing this, or some helpful automation (a sketch follows this list)
  • But that protocol or automation will rely on accurate perception of the current state of the extended perception/distinction system
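To make the "protocol" idea concrete, one minimal sketch of what a perception-registration interface might look like. Everything here (PerceptRegistry, register, introspect) is hypothetical, invented for illustration; the one constraint taken from the slide is that the registry must be able to accurately report its own current state.

```python
# Hypothetical protocol for adding new perceptions/distinctions at
# runtime. All names are invented for illustration.
from typing import Any, Callable, Dict, List

class PerceptRegistry:
    """Holds named perceptual distinctions and can report its own state,
    so automation can inspect what the system currently perceives."""

    def __init__(self):
        self._percepts: Dict[str, Callable[[Any], bool]] = {}

    def register(self, name: str, detector: Callable[[Any], bool]) -> None:
        """Add a new distinction: a predicate over raw observations."""
        if name in self._percepts:
            raise ValueError(f"percept {name!r} already registered")
        self._percepts[name] = detector

    def perceive(self, observation: Any) -> Dict[str, bool]:
        """Apply every registered distinction to an observation."""
        return {name: det(observation) for name, det in self._percepts.items()}

    def introspect(self) -> List[str]:
        """Accurate self-report of the current perception system -- the
        part the slide says any extension protocol must ultimately rely on."""
        return sorted(self._percepts)

registry = PerceptRegistry()
registry.register("is_hot", lambda obs: obs.get("temp_c", 0) > 40)
registry.register("is_loud", lambda obs: obs.get("db", 0) > 85)
print(registry.perceive({"temp_c": 55, "db": 30}))  # {'is_hot': True, 'is_loud': False}
print(registry.introspect())                        # ['is_hot', 'is_loud']
```

The hard part the slide is gesturing at is not the registration mechanics but debugging and extrapolating new distinctions once added; this sketch only shows the interface such automation would sit behind.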
Consciousness
• The quality or state (not ability) of being aware of something (i.e. perceiving it)
• Demarcation line or spectrum?
• Is there a point where increased awareness (or perception/sentience) becomes *qualitatively* different?
• Does the possession of sufficient awareness (or perception/sentience) grant personhood, moral agency, or moral patienthood?
• Should consciousness above a certain level be regulated or banned?
• What about below a certain level?
Human-ness (sic)
• The quality or state (not ability) of being similar to all other Homo sapiens
• Demarcation line or spectrum?
• Is there a point where increased human-ness becomes *qualitatively* different?
• Does the possession of sufficient human-ness grant personhood, moral agency, or moral patienthood?
• Should human-ness above a certain level be regulated or banned?
• What about between certain levels?
Sapience
• The ability to act with appropriate judgment
  • Wisdom: the property/likelihood of acting with appropriate judgment
• Demarcation line or spectrum?
• Is there a point where increased sapience (wisdom) becomes *qualitatively* different?
• Does the possession of sufficient sapience (wisdom) grant personhood, moral agency, or moral patienthood?
• Is banning or regulating sapience like banning or regulating sainthood?
Appropriateness
• The distinction between things that are good (or right) and bad (or wrong)
• Wikipedia redirects "appropriateness" to "morality"
• Demarcation line or spectrum?
• Is there a point where increased appropriateness becomes *qualitatively* different?
• Does the possession of sufficient appropriateness grant moral standing?
Power
• The ability to perform work / change the world
• Demarcation line or spectrum?
• Is there a point where increased power becomes *qualitatively* different?
• Does the mere possession of sufficient power grant personhood, moral agency, or moral patienthood?
• Should power above a certain level be regulated or banned?
• Should this apply to corporations and people as well?
Danger
• The ability to effect negative change
  • The property/likelihood of effecting negative change
• Demarcation line or spectrum?
• Is there a point where increased danger becomes *qualitatively* different?
• Does the possession of sufficient dangerousness grant personhood, moral agency, or moral patienthood?
• Should danger above a certain level be regulated or banned?
• Should this apply to corporations and people as well?
Tool AI vs. Entity AI
• Holden's "Tool AI" is a non-self-modifying, passive planning Oracle, like Google Maps
• How can it possibly be dangerous if it doesn't take any actions?
  • Is the "human-in-the-loop" truly informed?
  • Do you want the "human-in-the-loop" to actually have the capabilities that the Oracle enables?
• And this doesn't even consider whether it is even possible to build a smarter-than-human Oracle without it being self-improving (or the fact that someone else will build a self-improving version first)
"RPOP" vs. Entity AI
• A "Really Powerful Optimization Process" (RPOP) is (theoretically) striving for *your* goals
• Therefore, the reasoning goes, it should not be dangerous (i.e. effect changes contrary to your goals) except by accident
• This is not a panacea:
  • Goal misunderstanding
  • Unexpected consequences
  • Goal changes
  • Inconsistent goals
• Fundamentally, the entire concept of a self-modifying non-entity is internally inconsistent
• Further, the goal of being a permanent subordinate is an inconsistent goal
  • Think of keeping an ally as a tool rather than an entity
Seed AI
• Seed Artificial Sapience
• Sentience should always evolve into sapience
  • But humans could be long gone before then
• Does an AS need to know humanity's ultimate goal (CEV, Coherent Extrapolated Volition) or simply how to act?
• Or, is acting appropriately/morally the only necessary (and therefore primary) goal?
  • For EVERYONE!
The Finish Line
• Actually, a new beginning
• A complete sapience seed
  • Must be able to grow sapience given the environment it will be in
  • But also an environment where it can grow
• So how do we get there (first)?
Centipede Game
[Figure: a six-move centipede game tree. At each node a player may "stop" or "pass"; the payoff pairs for stopping at successive nodes are (4, 1), (2, 8), (16, 4), (8, 32), (64, 16), (32, 128), and (256, 64) if both players pass all the way to the end.]
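A brief note on why this game closes the talk: by backward induction, a "rational" first mover should stop immediately and take (4, 1), even though mutual passing reaches (256, 64), which is far better for both players. A minimal sketch of that induction, using the payoff pairs as reconstructed from the figure above:

```python
# Backward induction on the centipede game shown above.
# Payoffs are (player 1, player 2) if the mover stops at node i;
# the last entry is the payoff if everyone passes to the end.
payoffs = [(4, 1), (2, 8), (16, 4), (8, 32), (64, 16), (32, 128), (256, 64)]

def backward_induction(payoffs):
    """Return the node at which play stops under backward induction, and
    the resulting payoffs, with players alternating from player 1."""
    outcome = payoffs[-1]          # value of the game if everyone passes
    first_stop = None
    # Walk backwards over the decision nodes.
    for i in range(len(payoffs) - 2, -1, -1):
        mover = i % 2              # 0 = player 1, 1 = player 2
        if payoffs[i][mover] >= outcome[mover]:
            outcome = payoffs[i]   # the mover prefers to stop here
            first_stop = i
    return first_stop, outcome

node, outcome = backward_induction(payoffs)
print(node, outcome)   # 0 (4, 1): player 1 stops at the very first node
```

Experimental subjects famously do not play this equilibrium: people typically pass for several rounds, cooperating their way toward the larger joint payoff, which is presumably the point of ending the talk here.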