
AI: Paradigm Shifts



  1. AI: Paradigm Shifts • AI research trends continue to shift • Moving AI from a stand-alone component to a component within other software systems • consider that the original goal was to build AI, and later to construct knowledge-based systems • now the goal is autonomous (agents) or semi-autonomous (robots) software systems, or systems that work with humans (data mining, decision support tools) • Moving away from strictly symbolic computing or strictly connectionist computing to hybrid approaches • Bayesian or HMM approaches along with symbolic models or rules • Neural networks and HMMs • Neural networks and fuzzy logic • Genetic algorithms and neural networks • Concentrating on machine learning of some form • reinforcement learning is becoming more and more common • Bayesian learning is comparatively simple and also yields promising results • Knowledge representations are moving toward ontologies rather than stand-alone knowledge bases

  2. AI Approaches in the Future • Obviously, we can’t predict now what approaches will be invented in the next 5-10 years or how these new approaches will impact or replace current ones • However, the following approaches are finding uses today and so should continue to be used • data mining on structured data • machine learning approaches – Bayesian methods, neural networks, support vector machines – to create classifiers • case-based reasoning for planning, model-based reasoning systems • rules will continue to lie at the heart of most approaches (except neural networks) • mathematical modeling of various types will continue, particularly in vision and other perceptual areas • Interconnected (networked) software agents • AI as part of productivity software/tools
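The classifier-building approaches listed above can be made concrete with a small sketch. Below is a minimal naive Bayes classifier written from scratch in Python; the weather-style features, values, and labels are purely hypothetical illustrations, not taken from any real dataset.

```python
from collections import Counter, defaultdict
import math

# Toy training data: each example is (features, label).
# Feature names and labels here are hypothetical.
train = [
    ({"outlook": "sunny", "windy": "no"},  "play"),
    ({"outlook": "sunny", "windy": "yes"}, "stay"),
    ({"outlook": "rain",  "windy": "yes"}, "stay"),
    ({"outlook": "sunny", "windy": "no"},  "play"),
]

def fit(examples):
    """Count label frequencies and per-label feature-value frequencies."""
    label_counts = Counter(label for _, label in examples)
    feat_counts = defaultdict(Counter)  # (label, feature) -> value counts
    for feats, label in examples:
        for f, v in feats.items():
            feat_counts[(label, f)][v] += 1
    return label_counts, feat_counts

def predict(feats, label_counts, feat_counts):
    """Pick the label maximizing log P(label) + sum of log P(value | label, feature),
    with a rough add-one smoothing so unseen values never get zero probability."""
    best, best_score = None, float("-inf")
    total = sum(label_counts.values())
    for label, n in label_counts.items():
        score = math.log(n / total)
        for f, v in feats.items():
            counts = feat_counts[(label, f)]
            score += math.log((counts[v] + 1) / (n + len(counts) + 1))
        if score > best_score:
            best, best_score = label, score
    return best

model = fit(train)
print(predict({"outlook": "sunny", "windy": "no"}, *model))  # → play
```

The same loop structure carries over to any categorical dataset; real systems would, of course, use a library implementation rather than hand-rolled counting.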

  3. AI Research For the Short Term • Reinforcement learning • Applied to robotics • Semantic web • Semi-annotated web pages • Natural language understanding • Tying symbolic rule-based approaches with probabilistic approaches, especially for semantic understanding, discourse and pragmatic analysis • Social networks • Modeling and reasoning about the dynamics of social networks and communities including email analysis and web site analysis • Multi-agent coordination • How will multiple agents communicate, plan and reason together to solve problems such as disaster recovery and system monitoring (e.g., life support on a space station, power plant operations) • Bioinformatics algorithms • While many bioinformatics algorithms do not use AI, there is always room for more robust search algorithms
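The reinforcement-learning direction mentioned above can be sketched in a few lines. The following is a minimal tabular Q-learning loop on a hypothetical 5-cell corridor world; the states, rewards, and learning parameters are illustrative assumptions, not from the slides.

```python
import random

# Hypothetical corridor: states 0..4, agent starts at 0,
# reward +1 only on reaching state 4 (the goal).
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]              # move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

for episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # clip to the corridor
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        target = r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# Greedy policy after learning: in this world it should always move right
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

Robotics applications replace the table with a function approximator and the corridor with sensor-defined state, but the update rule is the same.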

  4. Some Predictions? • Next 5-10 years • Work continues on the semantic web, robotics/autonomous vehicles, natural language processing (NLP), speech recognition (SR), and vision • Within 10 years • part of the web is annotated for intelligent agent usage • modest intelligent agents are added to much application software • robotic caretakers reach fruition (but are too expensive for most) • SR reaches a sufficient level that continuous speech in specific domains is solved • NLP in specific domains is solved • Within 20 years • robotic healthcare made regularly available • the vision problem is largely solved • autonomous vehicles are available • intelligent agents are part of most software • cognitive prosthetics • the semantic web makes up a majority of web pages in terms of available knowledge • Within 50 years • nanotechnology combines with agent technology; people have intelligent machines running through their bodies! • humans are augmented with computer memory and processors • computers are inventing/creating useful artifacts and making decisions • Within 100 years (?) • true (strong) AI

  5. Social Concerns: Unemployment • According to economics experts, computer automation has created as many jobs as it has replaced • Automation has shifted job skills from blue collar to white collar, and thus many blue-collar jobs have been eliminated (assembly line personnel, letter sorters, etc.) • What about AI? • Does AI create as many jobs as it makes obsolete? • probably not; AI certainly requires programmers, knowledge engineers, etc., but once the system is created, there is no further job creation • Just what types of jobs might become obsolete because of AI? • secretarial positions because of intelligent agents? • experts (e.g., doctors, lawyers) because of expert systems? • teachers because of tutorial systems? • management because of decision support systems? • security (including police), armed forces, intelligence community?

  6. Social Concerns: Liability • Who is to blame when an AI system goes wrong? • Imagine these scenarios: • an autonomous vehicle causes a multi-car pile-up on the highway • a Japanese subway car does not stop correctly, causing injuries • an expert medical system offers a wrong diagnosis • a machine translation program incorrectly translates statements between diplomats, leading to conflict or sanctions • We cannot place blame on the AI system itself • According to law, liability in the case of an AI system can be placed on all involved: • the user(s) for not using it correctly • the programmers/knowledge engineers • the people who supplied the knowledge (experts, data analysts, etc.) • management and researchers involved • AI systems will probably require more thorough testing than normal software systems • yet it will be impossible to try all possible cases to find logical flaws • at what point in the software process should we begin to trust the AI system?

  7. Case Study: Therac-25 • A medical accelerator system used to create high-energy electron beams to destroy tumors; it can also convert the beam to X-ray photons for radiation treatments • Therac-25 is both hardware and software • earlier versions, the Therac-6 and Therac-20, were primarily hardware, with minimal software support • Therac-6 and -20 were produced by two companies, but Therac-25 was produced by only one of the two (AECL), borrowing software routines from Therac-6 (and, unknown to the quality assurance manager, from Therac-20) • 11 units were sold (5 in the US, 6 in Canada) in the early-to-mid 80s; during this time, 6 people were injured (several fatally) by radiation overdoses

  8. The 6 Reported Accidents • 1985: woman undergoing lumpectomy receives 15,000-20,000 rads – she eventually loses her breast due to overexposure to radiation, and also loses the use of her arm and shoulder • the treatment printout facility of the Therac-25 was not operating during this session, so AECL could not recreate the accident • 1985: patient treated for carcinoma of the cervix – a user interface error causes an overexposure of 13,000-17,000 rads; the patient dies in 4 months of an extremely virulent cancer; had she survived, total hip replacement surgery would have been needed • 1985: treatment for erythema on the right hip results in burning on the hip; the patient is still alive, with minor disability and scarring • 1986: patient receives an overdose caused by a software error, 16,500-25,000 rads, and dies within 5 months • 1986: same facility and same error; the patient receives 25,000 rads and dies within 3 weeks • 1987: AECL had “fixed” all of the previous problems; a new error of hardware coupled with user interface and operator error results in a patient who was supposed to get 86 rads being given 8,000-10,000 rads; the patient dies 3 months later • note: Therac-20 had hardware problems which would have resulted in the same errors as accidents 4 and 5 above, but because the safety interlocks were in hardware, the error never arose during treatment to harm a patient

  9. Causes of the Therac-25 Accidents • Therac-20 used hardware interlocks for controlling hardware settings and ensuring safe settings before the beam was emitted; Therac-25 relied on software instead • The user interface was buggy • The instruction manual omitted malfunction code descriptions, so users would not know why a particular shutdown had occurred • A hardware/software mismatch led to errors with turntable alignment • Software testing produced a software fault tree that appeared to use made-up likelihoods for the given errors (there was no justification for the values given) • In addition, the company was slow to respond to injuries and often reported “we cannot recreate the error”; they also failed to report injuries to other users until forced to by the FDA • Investigators found that the company had “less than acceptable” software engineering practices • There was a lack of useful user feedback from the Therac-25 system when it would shut down, and the failure reporting mechanism was off-line during one of the accidents

  10. Safety Needs in Critical Systems • As AI systems become more prevalent, it becomes more and more important that proper software engineering methodologies are applied to ensure correctness • we especially find this true in critical systems (e.g., Therac-25, International Space Station) and real-time systems (e.g., autonomous vehicles, subway system) • Some suggestions: • Increase the usage of formal specification languages (e.g., Z, VDM, Larch) • Add hazard analysis to requirements analysis • Formal verification should be coupled with formal specification • statistical testing, code and document inspection, automated theorem provers • Develop new techniques for software development that encapsulate safety • formal specifications for component retrieval when using previously written classes to limit the search for useful/usable components • reasoning on externally visible system behavior, reasoning about system failures (this is currently being researched to be applied to life support systems on the International Space Station)
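One lightweight version of the hazard-analysis idea above is a software interlock that refuses to act in an unsafe configuration rather than trusting the operator. The sketch below is a hypothetical illustration: the mode names, turntable positions, and function names are invented for this example and are not taken from the Therac-25 or any real system.

```python
# Hypothetical software interlock: the beam may fire only when the requested
# mode and the verified hardware configuration agree. All identifiers below
# are illustrative assumptions, not a real device API.

class InterlockError(Exception):
    """Raised when a hazardous mode/hardware combination is requested."""

# The only combinations hazard analysis has approved (hypothetical).
SAFE_COMBINATIONS = {
    ("electron", "scan_magnets_in"),
    ("xray", "flattener_in"),
}

def fire_beam(requested_mode, turntable_position):
    """Refuse (raise) instead of firing when the configuration is unsafe."""
    if (requested_mode, turntable_position) not in SAFE_COMBINATIONS:
        raise InterlockError(
            f"unsafe configuration: {requested_mode} with {turntable_position}")
    return f"beam fired in {requested_mode} mode"

print(fire_beam("xray", "flattener_in"))        # approved combination
try:
    # A mismatch like this (high-energy beam, no flattener) is exactly the
    # kind of state a hardware interlock on the Therac-20 would have blocked.
    fire_beam("xray", "scan_magnets_in")
except InterlockError as e:
    print("blocked:", e)
```

The point of the sketch is the default-deny design choice: the guard enumerates safe states and rejects everything else, mirroring what the slide calls encapsulating safety in the development technique itself.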

  11. Social Concerns: Explanation • An early complaint of AI systems was their inability to explain their conclusions • Symbolic approaches (including fuzzy logic, rule based systems, case based reasoning, and others) permit the generation of explanations • depending on the approach, the explanation might be easy or difficult to generate • Neural network approaches have no capacity to explain • Bayesian approaches might offer explanations, although typically they will be limited to conditional probabilities • Bayes nets can at least offer a chain of logic • As AI researchers have moved on to more mathematical approaches, they have lost the ability (or given up on the ability) to have the AI system explain itself • How important will it be for our AI system to explain itself? • is it important for speech recognition? • is it important for a diagnostic system? • is it important for an autonomous vehicle?
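The claim above that symbolic approaches permit explanation generation can be illustrated with a toy forward-chaining rule engine that records which rule produced each conclusion and from which premises. All rule names, facts, and conclusions below are hypothetical examples, not from any deployed system.

```python
# Minimal forward-chaining engine with an explanation trace.
# Each rule: (name, set of premises, conclusion). Rules are hypothetical.
RULES = [
    ("r1", {"fever", "cough"}, "flu_suspected"),
    ("r2", {"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def infer(initial_facts):
    """Apply rules to a fixed point, remembering why each fact was derived."""
    facts = set(initial_facts)
    explanation = {}            # conclusion -> (rule name, premises used)
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                explanation[conclusion] = (name, sorted(premises))
                changed = True
    return facts, explanation

facts, why = infer({"fever", "cough", "short_of_breath"})
for conclusion, (rule, premises) in why.items():
    print(f"{conclusion}: concluded by {rule} from {premises}")
```

The `why` table is exactly the kind of artifact a neural network lacks: each conclusion points back to a human-readable rule and its premises, so the system can answer "why?" by replaying the chain.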

  12. Social Concerns: AI and Warfare • What are the ethics of fighting a war without risking our lives? • Consider that we can bomb from a distance without risk to troops – since this lessens our risk, does it somehow increase our decision to go to war? • How would AI impact warfare? • mobile robots instead of troops on the battlefield • predator drone aircraft for surveillance and bombing • smart weapons • better intelligence gathering • While these applications of AI give us an advantage, might they also influence our decision to go to war more easily? • On the other hand, can we trust our fighting to AI systems? • Could they kill innocent bystanders? • Should we trust an AI system’s intelligence report?

  13. Social Concerns: Privacy • This is primarily a result of data mining • We know there is a lot of data out there about us as individuals • can data mining be used to invade our privacy? • what is the threat of data mining to our privacy? • will companies misuse the personal information that they might acquire? • We might extend our concern to include surveillance – why should AI be limited to surveillance of (hypothetical) enemies? • Speech recognition might be used to transcribe all telephone conversations • NLU might be used to intercept all emails and determine whether the content of a message is worth investigating • We are also seeing greater security mechanisms implemented at sites of national interest (airports, train stations, malls, sports arenas, monuments, etc.) – cameras, for instance • previously it was thought that enough people could never be hired to watch everyone, but computers could • with AI, they can also make inferences about what we are doing

  14. What If Strong AI Becomes a Reality? • Machines doing our work for us leave us with • more leisure time • the ability to focus on educational pursuits, research, art • computers could teach our young (is this good or bad?) • computers could be in charge of transportation, thus reducing accidents and possibly even saving fuel • computers may even be able to discover and create for us • cures for diseases, development of new power sources, better computers, etc. • On the negative side, this could also lead us toward • debauchery (with leisure time we might degrade into decadence) • consider the ancient Romans, who had plenty of free time because of slavery • unemployment, which itself could lead to economic disaster • if computers can manufacture anything we want, this too can lead to economic problems • We might become complacent and lazy and therefore not continue to do research or development

  15. AI: The Moral Dilemma • Researchers (scientists) have often faced the ethical dilemmas inherent in the products of their work • Assembly line: • Positive outcomes: increased production and economic boons • Negative outcomes: increased unemployment, dehumanized many processes, and led to increased pollution • Atomic research: • Positive outcomes: ended World War II and provided nuclear power • Negative outcomes: led to the cold war and the constant threat of nuclear war, creates nuclear waste, and now we worry about WMDs • Many researchers refused to go along with the US government’s quest to research atomic power once they realized that the government wanted it for atomic bombs • They feared what might come of using the bombs • But did they have the foresight to see what other problems would arise (e.g., nuclear waste) or the side-effect benefits (eventually, the arms race caused the collapse of the Soviet Union because of its expense)? • What side effects might AI surprise us with?

  16. Long-term Technological Advances • If we extrapolate prior growth of technology, we might anticipate: • enormous bandwidth (terabits per second), secondary storage (petabytes) and memory capacities (terabytes) by 2030 • in essence, we could record all of our experiences electronically for our entire lifetime and store them on computer; we could also download any experience across a network quickly • Where might this lead us? • Teleportation – combining network capabilities, virtual reality and AI • “Time travel” – being able to record our experiences, thoughts and personalities in a form of agent representative, so that future generations can communicate with us – combining machine learning, agents, NLU • Immortality – the next step is to upload these representatives into robotic bodies; while these will not be us, our personalities can live on, virtually forever • since we will be able to make copies and move them into new robotic bodies as needed

  17. Ethical Stance of Creating True AI • Today we use computers as tools • software is just part of the tool • AI is software • will we use it as a tool? • Does this make us slave masters? • ethically, should we create slaves? • if, at some point, we create strong AI, do we set it free? • what rights might an AI have? • would you permit your computer to go on strike? • would you care if your computer collects data on you and trades it for software or data from another computer? • can we ask our AI programs to create better AI programs and thus replace themselves with better versions? • What are the ethics of copying AI? • we will presumably be able to mass produce the AI software and distribute it, which amounts essentially to cloning • humans are mostly against human cloning, what about machine cloning?

  18. End of the World Scenario? • When most people think of AI, they think of • AI run amok • Terminator, Matrix, etc (anyone remember Colossus: The Forbin Project?) • Would an AI system with a will of its own (whether this is self-awareness or just goal-oriented) want to take over mankind or kill us all? • how plausible are these scenarios? • It might be equally likely that an AI that has a will of its own would just refuse to work for us • might AI decide that our problems/questions are not worthy of its time? • might AI decide to work on its own problems? • Can we control AI to avoid these problems? • Asimov’s 3 laws of robotics are fiction, can we make them reality? • How do we motivate an AI? How do we reward it?
