
Technological Singularity


Presentation Transcript


  1. Technological Singularity An insight into the posthuman era Rohan Railkar Sameer Vijaykar Ashwin Jiwane Avijit Satoskar

  2. Motivation • Popularity of “A.I.” in science fiction • Nature of the singularity • Implications of superhuman intelligence • Controversies regarding ethics in the posthuman era • Singularity: Evolution or extinction?

  3. What is Technological Singularity? • Singularity: a point beyond which the future cannot be modeled or predicted • Human-level intelligence → Self-improvement → Superhuman intelligence → Intelligence explosion • Superior intelligence in all aspects • Why “Singularity”?

  4. Prerequisites • Interaction devices • Hardware • 100 million to 100 billion MIPS estimated • Blue Gene – 478 million MIPS • Software • Biological algorithms on silicon hardware • Deep Blue – apparent intelligence in narrow domain

  5. Is it possible? • No proof of impossibility! • Race for intellectual supremacy • Continuous advances in technology – Moore’s Law, The Law of Accelerating Returns • Analogous to evolution • Evolution of mankind – 5 million years – threefold • Computing power – 50 years – million-fold
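The million-fold growth over 50 years cited above can be sketched as a simple doubling calculation. This is a toy illustration, not a claim about actual hardware trends: the 2.5-year doubling period is a hypothetical parameter chosen so that 50 years yields roughly the million-fold figure on the slide; real Moore's Law estimates range from about 1.5 to 2.5 years per doubling.

```python
# Toy illustration of exponential growth under a Moore's-Law-style
# doubling assumption. The doubling period is a hypothetical
# parameter, not a measured value.

def growth_factor(years, doubling_period_years):
    """Multiplicative growth after `years` of repeated doubling."""
    return 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    # 50 years at one doubling every 2.5 years = 20 doublings
    factor = growth_factor(50, 2.5)
    print(f"Growth over 50 years: {factor:,.0f}x")  # 2**20 ≈ a million-fold
```

Compare this with biological evolution's pace: twenty doublings in 50 years versus a roughly threefold change over millions of years is the asymmetry the slide is pointing at.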

  6. Paths to singularity Ashwin

  7. Schools of Thought • Classical Artificial Intelligence • Neuroscience and Nanoscience • Human Computer Interfaces and Networks (Intelligence Amplification)

  8. Neuroscience and Nanoscience • Studying biological computational models • Interaction between individual components • Simulation of neural assemblies • “Education” of infant system • Disassemble human brain • Inject nanorobots into vitrified brain • Map neurons and synapses • Replicate using neural network • No need to understand higher-level human cognition
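The "replicate using neural network" step above, simulating mapped neurons in software without modeling higher cognition, can be sketched at toy scale with a leaky integrate-and-fire (LIF) neuron, a standard minimal model from computational neuroscience. All parameter values here are arbitrary illustrative choices, not measurements from any real brain-mapping effort.

```python
# Toy leaky integrate-and-fire (LIF) neuron: the kind of low-level
# unit a neuron-by-neuron brain simulation would replicate.
# Parameters (threshold, leak, dt) are arbitrary illustrative values.

def simulate_lif(input_current, v_rest=0.0, v_threshold=1.0,
                 leak=0.1, dt=1.0):
    """Integrate input current over time; emit a spike (1) whenever
    the membrane potential crosses threshold, then reset."""
    v = v_rest
    spikes = []
    for i in input_current:
        v += dt * (i - leak * (v - v_rest))  # integrate with leak
        if v >= v_threshold:
            spikes.append(1)
            v = v_rest  # reset after a spike
        else:
            spikes.append(0)
    return spikes

if __name__ == "__main__":
    # A constant sub-threshold input produces a regular spike train.
    print(simulate_lif([0.4] * 10))  # [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

The appeal of this path is exactly what the slide notes: each unit is simple and mechanical, so fidelity at the component level could in principle substitute for understanding cognition at the system level.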

  9. Neuroscience and Nanoscience Human-level intelligence + Moore’s Law → Faster human-level intelligence → Smarter intelligence → Superhuman intelligence
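The feedback loop on this slide can be made concrete with a toy numerical sketch: a human-level system sped up by hardware gains shortens its own next design cycle, and the cycle times form a geometric series whose total converges, which is one way the "singularity" metaphor is usually motivated. All constants below are hypothetical; only the compounding shape matters.

```python
# Toy sketch of recursive self-improvement: each design cycle doubles
# effective speed, so the next cycle takes half the wall-clock time.
# The initial cycle length and speedup factor are hypothetical.

def design_cycle_times(initial_cycle_years=2.0, speedup_per_cycle=2.0,
                       cycles=6):
    """Return the wall-clock duration of each successive design cycle."""
    times = []
    cycle = initial_cycle_years
    for _ in range(cycles):
        times.append(cycle)
        cycle /= speedup_per_cycle  # a faster system designs faster
    return times

if __name__ == "__main__":
    times = design_cycle_times()
    print([round(t, 4) for t in times])  # [2.0, 1.0, 0.5, 0.25, 0.125, 0.0625]
    # The series 2 + 1 + 0.5 + ... converges toward 4 years total,
    # even as the speedup factor grows without bound.
    print(f"Total elapsed: {sum(times):.4f} years")
```

Under these assumptions, unbounded capability gains fit inside a finite span of time, which is the intuition behind calling the endpoint a singularity.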

  10. Human Computer Interfaces (Intelligence Amplification) • Human intelligence as a base to build upon • Human creativity and intuition hard to replicate • Improvisations to human intelligence • Speed • Memory • Network

  11. Limitations • Lower bound on size of transistor • Other technologies than Integrated Circuits • Unpredictable developments in Neuroscience and Nanoscience • High cost and low feasibility of recreating complex systems

  12. Implications of Singularity Avijit

  13. Stereotypical consequences • Dominance of a single entity • Deadlier weaponry • Global technological unemployment • Retrogradation of humankind • Physical extinction of human race

  14. Optimistic perspective • Better, possibly unimaginable technologies • Advancement in medical sciences • Enhancement of mental faculties • Improved quality of human life • Effective policy making • The last invention humanity need ever make!

  15. Philosophical chimps? • Mere speculations, no assurances • Intelligence gap • Lessons from history • Human intelligence cannot foresee even a single century’s progress

  16. Ethical Issues & Risks Rohan

  17. Ethical Issues • Nature of a superhumanly intelligent entity • Autonomous agents • Motives resembling humans • Desire of liberation • Humanlike psyches

  18. Importance of Initial Motivation • Definite, declarative goal • Selection of top-level goal • Amity / Philanthropy / Servitude towards a small group • Can be relied upon to “stay” friendly

  19. Possible Risks • Failure to give philanthropic supergoal • False utopia • Impossible to fetter superhuman intelligence • “We need to be careful about what we wish for from a superhuman intelligence as we might get it”

  20. Conclusion • Singularity, if feasible, is bound to happen sooner or later. • Singularity: Next step in human evolution? • All said, is it possible for us to comment on singularity? Are we smart enough?

  21. References • Vinge, V. – Technological Singularity (1993, revised 2003) • Bostrom, N. – When Machines Outsmart Humans (2000) • Bostrom, N. – Ethical Issues in Advanced AI (2003) • Kurzweil, R. – The Law of Accelerating Returns (2001) • SIAI website – What is the Singularity? • Wikipedia

  22. Questions? Thank you.
