This presentation discusses the challenges in achieving AGI and the lack of consensus among researchers, and proposes a framework of AGI axioms to focus short-term research and support a unified roadmap.
Working toward pragmatic convergence: AGI Roadmap and a Unified Roadmap
Itamar Arel, Machine Intelligence Lab (http://mil.engr.utk.edu), The University of Tennessee
Reality Check • Despite 60 years of hard work, there is still no AGI • The funding situation is dire • The field's reputation is poor • Bad news: things seem set to continue along the same trajectory
Can convergence happen? • AGI researchers are diverging along disparate trajectories • There is no consensus on what AGI really is • Claim: we need a unified view of the field's overarching goals • Proposition: an initial framework to build consensus on a short-term research focus
The case for AGI Axioms • Def: core functional attributes without which a system cannot be considered AGI • A necessary but not sufficient set • Advantages: • Help unify terminology • Promote an eventual roadmap • Help discard non-AGI propositions
Axiom #1: Observability • An AGI system must be able to continuously receive observations from its environment • Appears obvious, but is critical • The particular nature of the observations is irrelevant • Observations may be partial with respect to the environment's state
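As a concrete (if toy) reading of this axiom, here is a minimal Python sketch; the `Environment` class, its state fields, and the noise model are all illustrative assumptions, not from the talk:

```python
import random

class Environment:
    """Toy environment whose full state is never seen all at once."""

    def __init__(self) -> None:
        # Hidden state; the agent has no direct access to it.
        self.state = {"position": 0.0, "temperature": 21.5, "light": 0.8}

    def observe(self) -> dict[str, float]:
        """Return a partial, noisy observation of the current state.

        Per Axiom #1, the agent must be able to call this continuously;
        which modality comes back is irrelevant, and the observation
        need not cover the full state.
        """
        key = random.choice(list(self.state))  # one field per step: partial
        return {key: self.state[key] + random.gauss(0.0, 0.1)}

env = Environment()
for _ in range(3):
    print(env.observe())  # a continuous stream of partial observations
```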
Axiom #2: Actuation Capability • The ability to impact the environment in some desired manner • Without it there is no control loop, and hence no AGI • “Thinking” by itself is insufficient • Implies physical (or virtual) actuators
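Axioms #1 and #2 together close a control loop. A minimal sketch of such a loop, assuming a made-up one-dimensional environment and a hypothetical proportional controller standing in for the agent:

```python
import random

class LoopEnvironment:
    """Toy 1-D environment: the goal is to drive `position` toward zero."""

    def __init__(self) -> None:
        self.position = random.uniform(-5.0, 5.0)

    def observe(self) -> float:
        return self.position          # Axiom #1: observation

    def act(self, delta: float) -> None:
        self.position += delta        # Axiom #2: actuation closes the loop

env = LoopEnvironment()
for _ in range(20):
    obs = env.observe()
    decision = -0.5 * obs   # "thinking" alone is insufficient ...
    env.act(decision)       # ... the decision must feed back into the world
print(f"final position: {env.observe():.4f}")
```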
Axiom #3: Process High-Dimensional Data • Mammalian brains are concurrently exposed to high-dimensional sensory information • A system that cannot handle such input is not AGI • The input is typically multi-modal sensory information • Sensory data fusion must therefore take place
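A minimal sketch of what concurrent high-dimensional, multi-modal input can mean in practice; early fusion by concatenation is just one of many fusion schemes, and the array shapes are invented for illustration:

```python
import numpy as np

# Hypothetical sensory snapshots, one per modality (shapes are illustrative).
vision = np.random.rand(64, 64)   # e.g. a grayscale camera frame
audio = np.random.rand(1024)      # e.g. a window of audio samples
touch = np.random.rand(16)        # e.g. tactile sensor readings

def fuse(*modalities: np.ndarray) -> np.ndarray:
    """Early fusion by flattening and concatenation.

    Per Axiom #3, the system must cope with the resulting high-dimensional
    vector concurrently, not one low-dimensional channel at a time.
    """
    return np.concatenate([m.ravel() for m in modalities])

fused = fuse(vision, audio, touch)
print(fused.shape)  # (5136,) -- 4096 + 1024 + 16 dimensions at once
```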
Axiom #4: Capturing Spatiotemporal Dependencies • A core capability of the human brain • Representing a wide range of time scales is critical • Anticipating events as consequences of other events • Closely tied to pattern recognition
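One simple (and far from the only) way to represent a wide range of time scales is a bank of exponential moving averages with different decay rates; the sketch below is an illustrative stand-in, not the mechanism proposed in the talk:

```python
import math

class MultiScaleMemory:
    """One exponential moving average per decay rate: fast traces track
    recent events, slow traces retain long-range context -- a crude
    stand-in for representing a wide range of time scales."""

    def __init__(self, decays=(0.5, 0.9, 0.99)):
        self.decays = decays
        self.traces = [0.0] * len(decays)

    def update(self, x: float) -> list:
        self.traces = [d * t + (1.0 - d) * x
                       for d, t in zip(self.decays, self.traces)]
        return self.traces

mem = MultiScaleMemory()
for t in range(200):
    mem.update(math.sin(t / 10.0))        # an incoming event stream
print(["%.3f" % v for v in mem.traces])   # short-, mid-, long-range summaries
```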
Axiom #5: Utility Function • Existence of a functional goal • Drives action selection • Not necessarily reinforcement-learning-like • Intrinsic feedback in addition to external feedback • The credit assignment problem underlies “strategic thinking”
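A toy sketch of utility-driven action selection with discounted credit assignment; the talk notes the utility need not be reinforcement-learning-like, so the RL-flavored example below, with its made-up `utility`, `beta`, and `rollout` names, is just one illustrative instance:

```python
import random

def utility(external: float, intrinsic: float, beta: float = 0.1) -> float:
    """Combined utility: external feedback plus an intrinsic bonus.
    The weight `beta` is an illustrative parameter, not from the talk."""
    return external + beta * intrinsic

def discounted_return(rewards, gamma: float = 0.95) -> float:
    """Credit assignment over time: later rewards count for less, so an
    action is judged by its long-run ("strategic") consequences."""
    return sum(r * gamma ** t for t, r in enumerate(rewards))

def rollout(action: str) -> list:
    """Simulate ten steps of feedback for a candidate action."""
    base = {"explore": 0.2, "exploit": 0.5}[action]
    novelty = 1.0 if action == "explore" else 0.0
    return [utility(base + random.gauss(0.0, 0.05), novelty)
            for _ in range(10)]

# Action selection driven by the utility function (Axiom #5):
best = max(["explore", "exploit"], key=lambda a: discounted_return(rollout(a)))
print("selected action:", best)
```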
Avoiding the 10 IQ Fallacy • AGI is hard to demonstrate on small-scale problems • Narrow “AI” can always step in (and sometimes does better) • The axioms should reflect “true” AGI attributes • … more during the AGI Roadmap discussion