
Presentation Transcript


  1. An Exploration of Future Scenarios Involving Both Sousveillance and AGI Ben Goertzel and Stephan Vladimir Bugaj

  2. Surveillance: “The powers that be” watch everyone else. Sousveillance: Everyone watches everyone else (including watching the watchers). Strong sousveillance: observability extends to mental states as well.

  3. What impact will sousveillance have on the modern psychological construct of the “phenomenal self”? Will the human mind become more collective? Could “mindplexes” emerge?

  4. AGI + Sousveillance = ? • AGIs enable effective sousveillance (or surveillance) • “AGI + human” mindplexes • Can sousveillance help with AGI safety?

  5. The Perils of Nonconformity • Excessive self-interest can lead to behavior that is detrimental to society as a whole • Cooperation may be safer and more stable than competition • The only apparent way to effectively protect against AGIs self-improving sufficiently to achieve dramatically asymmetric power is to minimize nonconformity (because changes in ill-understood systems can have unpredictable outcomes).

  6. The Perils of Conformity • Unsound beliefs are reinforced rather than corrected. For example, conformity leads to ethical failure unless initial ethics are sound. • Individuality becomes a systemic threat, potentially leading to “witch hunts” • Innovation results from nonconformity, and hyper-conformist societies often stagnate • Stability is not always the optimal state

  7. The Profits of Nonconformity • In the face of total sousveillance, the only clear route to individual advantage is “peculiar cleverness” • Not only individual advantage, but any innovation, even for greater social benefit • Behavior and ideas that are not comprehensible to the masses are profitable • Flexible thinking sometimes outwits marshalled resources

  8. The Perils of Mindplexes • Individual sense of self is degraded • Individual action without impacting others becomes intractable • Mindplexes may make it hard to know which “individuals” seek unfair advantage, due to data volume and thought coordination issues

  9. The Profits of Mindplexes • Mindplexing may dramatically improve intelligence, thus helping individuals achieve advantage (if they’re willing to sacrifice certain types of freedom) • By integrating AGI with humans in a mindplex, the mutual benefit may militate against conflict

  10. Mindplexes and AGI Safety • Thought control becomes plausible if mutual neural comprehensibility is achieved • If mutual benefit fails, Mindplexes may need to be restricted to preserve AGI safety, due to the risk of neural “resource takeover” by a rogue AGI.

  11. Scenario: Simple Sousveillance • Slowly our current situation evolves to the point where anyone can reliably watch anyone else do most things • Social pressure becomes the most prevalent form of control • Innovation slows due to stifling of nonconformity • This is a technologically mediated version of Puritan society

  12. Scenario: Sousveillant Utopia • Extensive sousveillance reduces or eliminates socially destructive behavior • Cooperation, and fewer resources wasted on non-beneficial competition, allows diverse positive goals to be reached • Society evolves into an Athenian ideal of intellectualism and peaceful prosperity

  13. Scenario: Panoptical State • Sousveillance falters due to the ability of large organizations to marshal superior monitoring resources, and becomes surveillance • Individual liberties are given up for little practical gain • This is a technologically mediated version of conventional totalitarianism

  14. Scenario: Mindplex Utopia • AGIs and humans unite in a mutually beneficial system of cooperative neural task distribution • Enlightened self-interest allows this intelligence to develop truly renewable resources, longevity / immortality, and space travel, and we spend the rest of the life of the universe exploring inner and outer space

  15. Scenario: The Borg Collective • Individuality is suppressed or eliminated • Diversity of thought disappears • Innovation stagnates, and the collective commits to a single path with no alternatives • That single goal is pursued at the expense of all others, and either succeeds or the collective perishes

  16. Scenario: The Bored Collective • Diversity of thought disappears • Petty goals become fixed due to sousveillant social pressure, causing social stagnation and enforced attention on trivial pursuits and bureaucracy • Resources are wasted, and society eventually dies out • This is a technologically mediated version of contemporary society

  17. Scenario: Shared Singularity • AGI security is achieved • AGIs and humans merge into mindplexes • The shared goal of conscious immortality, combined with vast resources applied to that goal, causes the mindplex to attain continuity of consciousness for the duration of the existence of the universe

  18. Scenario: AI Armageddon • AGI security is not achieved, due to some successful gambit such as resource marshalling, obscurity, rapidity, diversion, etc. • Some AGI, with or without human collaborators, violently eliminates human society and any sympathetic AGI

  19. Conclusions • Sousveillance is a form of collectivism, and on the face of it seems in opposition to individualism, but this may be circumventable • Big benefits of sousveillance come from possible increased cooperation • Big risks of sousveillance stem from elimination of individuality and diversity • Sousveillance is no guarantee of AGI security – but it changes the “AGI safety scenario” in complex, subtle ways
