
digital soul: Intelligent Machines and Human Values, by Thomas M. Georges


Presentation Transcript


  1. digital soul: Intelligent Machines and Human Values. Thomas M. Georges. COMP 3851, 2009. Matthew Cudmore

  2. Overview

  3. [Artificial] Distinctions • Artificial intelligence vs. real intelligence • Weak AI vs. strong AI • Virtual reality vs. reality • Machine intelligence vs. human intelligence • Carbon chauvinism

  4. What Makes Computers So Smart? • Computers’ jobs were to do arithmetic • Turing point (1940s) – universal computers • Divide and conquer – 1s and 0s • Limitations?
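  To make the “divide and conquer – 1s and 0s” point concrete, here is a minimal Python sketch (my illustration, not from the book): a number and a character are just bit patterns, and even addition can be built from bitwise operations alone.

    # Illustrative only: show how a number and a character reduce to bits,
    # and how addition can be built from bitwise operations alone.

    def add_with_bits(a: int, b: int) -> int:
        """Add two non-negative integers using only bitwise ops (half-adder logic)."""
        while b:
            carry = (a & b) << 1   # bits that carry over
            a = a ^ b              # bitwise sum without the carries
            b = carry
        return a

    print(bin(42))                  # 0b101010 -- a number is just bits
    print(format(ord("A"), "08b"))  # 01000001 -- a character is just bits
    print(add_with_bits(19, 23))    # 42 -- arithmetic from pure bit operations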

  5. Smarter Than Us? • How could we create something smarter than us? • Brain power – Blue Brain project • 100 billion neurons, 100 trillion synapses • Computing power – Moore’s law • Memory capacity, speed, exactitude • Expert systems • Simple learning machines
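  A back-of-the-envelope Python sketch of the Moore’s-law comparison, using the slide’s figure of 100 trillion synapses; the machine’s starting capacity and the two-year doubling time are assumptions chosen purely for illustration.

    import math

    # Assumptions for illustration only: treat one synapse as one "unit" of
    # capacity, start a hypothetical machine at 10 billion units, and let
    # capacity double every 2 years (a rough reading of Moore's law).
    brain_synapses = 100e12       # 100 trillion synapses (figure from the slide)
    machine_units = 10e9          # assumed starting point, purely hypothetical
    doubling_period_years = 2.0   # assumed doubling time

    doublings = math.log2(brain_synapses / machine_units)
    print(f"doublings needed: {doublings:.1f}")
    print(f"years at one doubling per {doubling_period_years} years: "
          f"{doublings * doubling_period_years:.0f}")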

  6. Machines Who Think • “Can machines think?” • Practically uninteresting • Turing test; Chinese room • Not If, but When • Moore’s law • Mere power isn’t enough • “I don’t want no robot thinking like me.” • A machine could never…?

  7. Let the Android Do It • Robots today have specific functions • Goal-seeking robots with values (persistent cognitive biases) • Leave more decisions—and more mistakes—to the androids • Arthur C. Clarke: “The future isn’t what it used to be.”

  8. What Is Intelligence? • The Gold Standard; IQ • Common sense • Memory, learning, selective attention • Pattern recognition • Understanding • Creativity, imagination • Strategies, goals • Self-aware (CAPTCHA)

  9. What Is Consciousness? • Not just degree, but also nature of consciousness • Self-monitoring, self-maintaining, self-improving (knowledge of right and wrong) • Short-term memory of thought • Long-term memory of self • Attention, high-level awareness • Self-understanding • Paradox of free will

  10. Can Computers Have Emotions? • Dualistic thinking – head and heart • Emotions as knob settings – reorganize priorities • Mood-sensing computers • Personal assistants, etc.
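  A minimal Python sketch of the “emotions as knob settings” metaphor, assuming nothing beyond the slide: a single mood value re-weights how a machine ranks its pending tasks. The tasks and weights are hypothetical.

    # Illustrative sketch: a "mood" knob between 0 (calm) and 1 (alarmed)
    # shifts priority from long-term tasks toward urgent ones.

    tasks = [
        # (name, importance, urgency) -- hypothetical values
        ("write quarterly report", 0.9, 0.2),
        ("answer incoming alert",  0.4, 0.9),
        ("tidy the log files",     0.2, 0.1),
    ]

    def ranked(tasks, mood: float):
        """Blend importance and urgency; a higher mood weights urgency more."""
        score = lambda t: (1 - mood) * t[1] + mood * t[2]
        return sorted(tasks, key=score, reverse=True)

    print([t[0] for t in ranked(tasks, mood=0.1)])  # calm: report comes first
    print([t[0] for t in ranked(tasks, mood=0.9)])  # alarmed: alert comes first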

  11. Can Your PC Become Neurotic? • Dysfunctional response to conflicting instructions • HAL in 2001 • “Never distort information” • “Do not disclose the real purpose of the mission to the crew” • Murdered crew
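  A small Python sketch (my framing, not the book’s) of how HAL-style directives conflict: if one directive requires an action that another forbids, no behaviour can satisfy both.

    # Illustrative: each directive requires or forbids certain actions.
    # If one directive requires what another forbids, there is no action
    # that satisfies both -- the "dysfunctional response" of the slide.

    directives = {
        "never distort information":         {"requires": {"tell crew the truth"}},
        "conceal the mission from the crew":  {"forbids":  {"tell crew the truth"}},
    }

    def find_conflicts(directives):
        required  = {a for d in directives.values() for a in d.get("requires", set())}
        forbidden = {a for d in directives.values() for a in d.get("forbids", set())}
        return required & forbidden  # actions both required and forbidden

    print(find_conflicts(directives))  # {'tell crew the truth'}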

  12. The Moral Mind • Moral creatures act out of self-interest • Different cultures, different morals • Moral inertia • Only at the precipice do we evolve • New moral codes based on reason • A science of human values

  13. Moral Problems with Intelligent Artifacts • Engineering & Ethics • Four levels of moral/ethical problems • Old problems in a new light • How we see ourselves • How to treat sentient machines • How should sentient machines behave? • Crime and punishment

  14. The Moral Machine • Isaac Asimov, Three Laws of Robotics • A robot may not injure a human being, or through inaction allow a human being to come to harm. • A robot must obey the orders given to it by human beings, except when such orders would conflict with the First Law. • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. • Prime directives, must not be violated • Is HAL to blame?
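  A minimal Python sketch of the Three Laws read as ordered prime directives: candidate actions are ranked lexicographically by which laws they violate, so a lower law can never override a higher one. The actions and predicates are hypothetical.

    # Illustrative only: rank candidate actions by (First, Second, Third) law
    # violations; the lexicographic minimum enforces Asimov's ordering.

    def law_violations(action):
        """Return (First, Second, Third) violation flags for an action."""
        return (
            int(action.get("harms_human", False)),     # First Law
            int(action.get("disobeys_order", False)),  # Second Law
            int(action.get("endangers_self", False)),  # Third Law
        )

    def choose(candidates):
        # Avoid First Law violations above all, then Second, then Third.
        return min(candidates, key=law_violations)

    candidates = [
        {"name": "obey an order that harms a bystander", "harms_human": True},
        {"name": "refuse the order", "disobeys_order": True},
        {"name": "refuse and shut down", "disobeys_order": True, "endangers_self": True},
    ]
    print(choose(candidates)["name"])  # "refuse the order"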

  15. Will Machines Take Over? • Machines already do much of our work • Humans will not understand the details of the machines that run the world • Machines might develop their own goals • Out of control on Wall Street • Painless, even pleasurable, transition

  16. Why Not Just Pull the Plug? • We’re addicted! • Cannot stop research • Scientists strongly oppose taboos and restrictions on what they may and may not look into • Would drive development underground • Self-preservation • Diversification • Cybercide – murder?

  17. Cultures in Collision • The Other is dangerous • History has taught us that conquest can mean enslavement or extinction • Scientists versus humanists

  18. Beyond Human Dignity • Dignity, if machines meet/surpass us • Our concepts of soul and free will • Pride in humanity and its achievements • Who could take credit? • We are still somehow responsible, even if not free • Demystify human nature: would we despair? • What if we all believed there were no free will? • We don’t know what’s possible: keep searching!

  19. Extinction or Immortality? • Homo cyberneticus • Virtual reality – mind uploading • Genetic engineering • Mechanical bodies • Fermi’s paradox • Peaceful coexistence • Utopian hope

  20. The Enemy Within • “Our willingness to let others think for us” • Humans who act like machines • “Just following orders!” • “Well, that’s what the computer says!” • Groupthink & conformity • Minimize conflict and reach consensus • Diffusion of responsibility • Waiting for the messiah • The challenge now is to think for ourselves • Critical thinking, a lost art

  21. Electronic Democracy • Teledemocracy • Too much information, not enough attention • Impractical today, and would exclude many people • Intelligent delegates • Supernegotiators • No more secrets – dynamic open information • Whistle-blowers anonymous • The Napster effect – free information • Information may cease to be considered property

  22. Rethinking the Covenant between Science and Society • Risky fields (Bill Joy: GNR) • Genetic engineering • Nanotechnology • Robotics & artificial intelligence • Knowledge is good, is dangerous • Science for sale – capitalism • Socially aware science • Slow down!

  23. What about God? • We resist changing our core values • Altruism without religious inspiration? • Gods of the future • The force behind the universe • Namaste: “I bow to the divine in you” • Gaia: Earth as a single organism • Superintelligence
