AI Redefined
• Given what we have studied in this course, the author offers a new definition:
• AI is the study of the mechanisms underlying intelligent behavior through the construction and evaluation of artifacts designed to enact those mechanisms
• There are several noteworthy things about this definition
– we commit only to intelligent behavior
– evaluation is a critical component of the definition: we no longer rely solely on the Turing Test, but instead evaluate the performance of any AI system, although the definition does not tell us how to perform that evaluation
– the emphasis is on artifacts, that is, working systems
AI as a Field
• From the definition, we see that AI is (or should be) less concerned with a central or single theory of mind
– it is more empirically targeted: working systems
• Thus, AI is an engineering pursuit
– the creation of working systems
• The need for evaluation makes AI a science
– while creating systems is well and good, without analyzing those systems to understand why they work or do not work, AI would be working in a void
• This definition denotes a paradigm shift away from
– philosophy of mind: we do not need to study mind to create mind; the human mind might help with models, but the formal studies found in philosophy have done little to help
– psychology of the human mind: we may gain some understanding of what to do through experimentation, but again, this sort of pursuit has led AI astray
PSS Hypothesis Redux
• Recall that in our first lecture we considered the PSS Hypothesis: a physical symbol system has the necessary and sufficient means to exhibit intelligent action
• The focus on this hypothesis led AI to research
– the use of symbols to model the world
– the design of search strategies to apply operators on the given symbols
– the use of heuristics to guide the search
– the use of an empirical approach to research: build (construct) and test to prove your point
• While this has helped form a basis for AI, it has also misled AI, much as the earlier reliance on philosophy and psychology did
– do we need symbolic knowledge? neural networks suggest otherwise
– do we need heuristic search strategies? model-based approaches (whether structural/functional, Bayesian, or HMM-based) seem to indicate less need for this
Why Has AI Not Succeeded?
• The author continues by examining the challenges of
– symbolic AI: the lack of grounding of symbols; the lack of the social context by which symbols are learned in humans; systems constructed using symbolic AI approaches remain too brittle (striving for single interpretations rather than multiple interpretations or contexts, and having limited amounts of knowledge)
– subsymbolic AI: neural network nodes are not equivalent to neurons, and the number of nodes differs substantially from the number of neurons in the brain of even the smallest creature; while neural networks can be used to construct context-sensitive memories, the actual storage of memories (situations, cases, events) remains beyond our abilities because we do not know how memories are formed in the brain
What Should AI Research?
• This is an open question
– what we tend to find is less emphasis in AI research on a grand unifying theme or holy grail and far more emphasis on solving fundable problems
• Some of the more common approaches found in AI research are
– expanding the capabilities of neural networks by identifying new algorithms and combining NNs with approaches like genetic algorithms (GAs) and fuzzy logic (FL)
– agent-based approaches
– mathematical model-based approaches (for instance, HMMs)
– ontologies to provide problem-independent knowledge sources
– learning from the ground up
My Research: Abduction
• Given data to explain, search for possible explainers (hypotheses)
• Score them
• Assemble them into a composite explanation
Hypothesis Assembly Algorithm
• Look for essential hypotheses
– a hypothesis that is the only way to explain some data
• Include/propagate/remove (see the next slide) and repeat from the top
• Look for superior hypotheses
– a hypothesis that is clearly superior at explaining some data because its plausibility value is substantially higher than any other explainer's
• Include/propagate/remove and repeat from the top
• Look for better hypotheses
– any hypothesis that explains remaining data better than any other
• Include/propagate/remove and repeat from the top
• If there are still data to explain, either guess or quit with unexplained data
Include/Propagate/Remove
• If a hypothesis is found to be essential, superior, or better (or is guessed at), then
– include it in the composite
– propagate the results of including this hypothesis
– if this hypothesis is incompatible with other potential explainers, remove those other hypotheses; this could create new essentials
– if this hypothesis is either associated with or lends support to another hypothesis, increase that hypothesis' plausibility; this could create new superior or better hypotheses
– if this hypothesis diminishes support for another hypothesis, reduce that hypothesis' plausibility; in doing so, this could allow another hypothesis to become superior or better
– remove all data that the newly included hypothesis explains
• A code sketch of this assembly loop follows below
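To make the assembly procedure on the last two slides concrete, here is a minimal Python sketch of a greedy abductive assembler. The Hypothesis fields (a plausibility score, the data each hypothesis explains, incompatibility and support links) and the numeric thresholds (superiority_margin, support_boost) are illustrative assumptions, not the actual implementation used in this research.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    name: str
    plausibility: float                              # score from the hypothesis generator
    explains: set = field(default_factory=set)       # data items this hypothesis can explain
    incompatible: set = field(default_factory=set)   # names of mutually exclusive hypotheses
    supports: set = field(default_factory=set)       # names of hypotheses it lends support to

def assemble(hypotheses, data, superiority_margin=0.3, support_boost=0.1):
    """Greedy abductive assembly: repeatedly pick an essential, superior, or
    better hypothesis, then include/propagate/remove until no data remain."""
    pool = {h.name: h for h in hypotheses}
    remaining = set(data)
    composite = []

    def explainers(d):
        return [h for h in pool.values() if d in h.explains]

    def include(h):
        composite.append(h)
        # propagate: remove incompatible rivals, boost supported hypotheses
        for name in h.incompatible:
            pool.pop(name, None)
        for name in h.supports:
            if name in pool:
                pool[name].plausibility += support_boost
        # remove all data the newly included hypothesis explains
        remaining.difference_update(h.explains)
        pool.pop(h.name, None)

    while remaining:
        chosen = None
        # 1. essential: the only remaining way to explain some datum
        for d in remaining:
            cands = explainers(d)
            if len(cands) == 1:
                chosen = cands[0]
                break
        # 2. superior: clearly more plausible than any rival for some datum
        if chosen is None:
            for d in remaining:
                cands = sorted(explainers(d), key=lambda h: h.plausibility, reverse=True)
                if len(cands) > 1 and cands[0].plausibility - cands[1].plausibility >= superiority_margin:
                    chosen = cands[0]
                    break
        # 3. better: the best remaining explainer of any datum (effectively a guess when scores are close)
        if chosen is None:
            cands = [h for d in remaining for h in explainers(d)]
            if not cands:
                break                    # quit with unexplained data
            chosen = max(cands, key=lambda h: h.plausibility)
        include(chosen)

    return composite, remaining
```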
Example
• Imagine that we have 6 hypotheses available to explain 4 data, as shown in the figure:
– the solid lines indicate what each hypothesis can explain, the dashed lines indicate hypotheses that are mutually exclusive, and the dotted line marked (+) indicates support (if H2 is true, H3 is supported, and vice versa)
• Our best explanation will be generated as follows:
– select H4 (confirmed); select H1 (essential), which rules out H6
– removing H6 makes H2 essential (now the only way to explain D2)
– H2 supports H3, so H3 becomes the better choice to explain D3
– the best explanation is {H1, H2, H3, H4}
– a runnable reconstruction of this example is sketched below
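As a usage example, here is a hypothetical reconstruction of the figure that is consistent with the walkthrough above; the exact links and plausibility values of the original figure are not reproduced here, so the numbers below are made up. The greedy sketch picks hypotheses in a slightly different order (it has no explicit "confirmed" step), but it arrives at the same composite.

```python
# Hypothetical reconstruction of the slide's figure; links and scores are chosen
# only to be consistent with the walkthrough, not taken from the figure itself.
D1, D2, D3, D4 = "D1", "D2", "D3", "D4"
hyps = [
    Hypothesis("H1", 0.6, explains={D1}, incompatible={"H6"}),
    Hypothesis("H2", 0.5, explains={D2}, supports={"H3"}),
    Hypothesis("H3", 0.5, explains={D3}, supports={"H2"}),
    Hypothesis("H4", 1.0, explains={D4}),                         # the "confirmed" hypothesis
    Hypothesis("H5", 0.5, explains={D3, D4}),
    Hypothesis("H6", 0.5, explains={D2, D3}, incompatible={"H1"}),
]

composite, unexplained = assemble(hyps, {D1, D2, D3, D4})
print(sorted(h.name for h in composite))   # ['H1', 'H2', 'H3', 'H4']
print(unexplained)                         # set()
```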
Abduction Applied
• Red blood cell typing (data interpretation)
– given blood cell reactions, explain them in terms of blood antigens
• Medical diagnosis (liver disorder diagnosis)
• Speech recognition
– ARTREC's input was microbeam pellet data, so it was more like a "lip reading" system
• Natural language understanding
• Theory formation
– which theory better explains life on Earth, evolution or creationism?
– which hypothesis better explains the evidence produced in a trial, that a person was the murderer or was not?
• Hand-written character recognition
Explaining a Character
• The features (data) found to be explained for this character are three horizontal lines and two curves
• While both the E and F characters were highly rated, "E" can explain all of the features while "F" cannot, so "E" is the better explanation
• A toy coverage comparison in code follows below
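The deciding factor here is explanatory coverage rather than raw score. Below is a toy comparison reusing the Hypothesis class sketched earlier; the feature names and scores are made up for illustration.

```python
# Toy coverage comparison: equal plausibility, different explanatory coverage.
features = {"hline1", "hline2", "hline3", "curve1", "curve2"}         # features found in the image
E = Hypothesis("E", 0.9, explains=set(features))                      # can account for every feature
F = Hypothesis("F", 0.9, explains={"hline1", "hline2", "curve1"})     # leaves some features unexplained

best = max((E, F), key=lambda h: (len(h.explains & features), h.plausibility))
print(best.name)   # E wins on coverage
```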
Top-down Guidance
• One benefit of this approach is that, by using domain-dependent knowledge,
– the abductive assembler can increase or decrease individual character hypothesis beliefs based on partially formed explanations
– for instance, in the postal mail domain, if the assembler detects that it is working on the zip code (because it already found the city and state on one line), then it can rule out any letters it thinks it found
– since we know we are looking at Saint James, NY, the following five characters must be numbers, so "I" (for one of the 1s), "B" (for the 8), and "O" (for the 0) can all be ruled out (or at least scored less highly)
• A small sketch of this kind of score adjustment follows below
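Here is a hypothetical sketch of that adjustment, again reusing the Hypothesis class from the earlier sketch; the penalty value and the choice to penalize rather than remove non-digit hypotheses are assumptions for illustration.

```python
# Hypothetical zip-code context rule: once the assembler believes it is reading
# a zip code, non-digit character hypotheses are penalized so digit readings win.
DIGITS = set("0123456789")

def apply_zipcode_context(char_hypotheses, penalty=0.5):
    """char_hypotheses: Hypothesis objects whose name is the character that was read."""
    for h in char_hypotheses:
        if h.name not in DIGITS:
            h.plausibility -= penalty    # or remove the hypothesis outright
    return char_hypotheses

# e.g. an "I"/"1" ambiguity: after the context rule, the digit reading is preferred
ambiguous = [Hypothesis("I", 0.7, explains={"stroke1"}),
             Hypothesis("1", 0.6, explains={"stroke1"})]
best = max(apply_zipcode_context(ambiguous), key=lambda h: h.plausibility)
print(best.name)   # 1
```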
Full Example in a Natural Language Domain
Decision Making
• Based on the abduction algorithm
• Used to solve 3 different problems
– grocery shopping list generation
– meal planning
– departmental course scheduling
Other Research Areas
• Classification
– automated syntax error debugging
– Linux user classification
– grammatical errors of non-native English speakers
– a CAI (computer-assisted instruction) tool for classifying student errors
• Music creation
– a combination of routine design and genetic algorithms
• Automated software creation
– a combination of case-based reasoning and routine design
– select code components from a code library based on function (goals)
– use pseudocode plans as prior cases
Research Topic: The Semantic Web
• How can we automate the process of using the knowledge available on the Internet?
– we need to make the knowledge available in accessible forms (ontologies)
– we need to provide a suite of problem solvers that can
– find the knowledge they need
– make decisions based on the knowledge found
– communicate with other problem solvers when the needed knowledge is not available, or when they have a specific subproblem to spawn (communication might require social interactions beyond simple message passing, for instance polling many sources or determining whether an agent is trustworthy)
– migrate to other processors, either because their current processor is busy or, more usefully, because the knowledge needed is located elsewhere (this capability is optional)
• A minimal ontology-query sketch follows below
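As one concrete piece of this picture, here is a minimal sketch of a problem solver finding knowledge in a published ontology using the rdflib library and a SPARQL query; the URL and schema names are placeholders, not a real ontology, and this is only an illustration of "find the knowledge they need", not a full agent.

```python
from rdflib import Graph

# Load a (hypothetical) ontology published on the web and query it with SPARQL.
g = Graph()
g.parse("http://example.org/university.owl")        # placeholder ontology URL

results = g.query("""
    SELECT ?course ?title WHERE {
        ?course a <http://example.org/schema#Course> ;
                <http://example.org/schema#title> ?title .
    }
""")
for course, title in results:
    print(course, title)
```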
Research Topic: Autonomous Vehicles
• How can a vehicle be programmed to carry out a "mission" on its own with little or no human intervention?
– this requires mission planning, path planning, sensing (and sensor interpretation), decision making, reactive planning, and failure handling
– all of these steps must be done in real time except mission planning and path planning, which can be done prior to the start of the mission
• Each type of vehicle has its own unique challenges
– airplanes and submarines deal with 3-D movement and have fewer obstacles to contend with, but must also handle drift and currents
– automobiles have to deal with other road traffic; off-road vehicles have to deal with rough terrain
– indoor robots deal with human traffic, furniture, walls, etc.
– many of these vehicles do not use cameras for input but sonar and/or radar instead
Research Topic: Evolving Intelligence
• Rodney Brooks of MIT claims that AI needs to evolve through self-learning
– in his lab, he has a number of robots placed into an environment
– the robots start with a base behavior that is layered
– at the lowest layer, behaviors are hard-coded
– at higher layers, behaviors can be manipulated through unsupervised learning algorithms, so that if a robot does something wrong (falls off a table, runs into a wall), it learns not to do that
– eventually, the robots learn useful behaviors like purposeful guidance and goal-directed behavior
– the robots operate somewhat like autonomous agents in that each layer consists of one or more agents that communicate with other agents at the same and different layers
• A minimal code sketch of the layering idea follows below
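The following toy sketch only illustrates the layering idea, behaviors arbitrated by a fixed priority ordering; it is not Brooks' subsumption architecture or his learning code, and the behaviors, sensor keys, and priorities are made-up assumptions.

```python
# Toy layered controller: each behavior either proposes an action or defers
# (returns None), and a fixed priority ordering arbitrates among the layers.
def avoid_obstacle(sensors):            # reflexive, hard-coded layer
    return "turn_left" if sensors.get("obstacle_ahead") else None

def seek_goal(sensors):                 # goal-directed layer (could be tuned by learning)
    return "move_toward_goal" if sensors.get("goal_visible") else None

def wander(sensors):                    # default exploratory layer
    return "move_forward"

LAYERS = [avoid_obstacle, seek_goal, wander]   # highest priority first

def decide(sensors):
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action

print(decide({"obstacle_ahead": True}))   # turn_left
print(decide({"goal_visible": True}))     # move_toward_goal
print(decide({}))                         # move_forward
```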
Research Topic: Interfaces
• Going beyond what we have already covered with speech recognition and natural language understanding, there are many other ideas where AI is used to assist humans
– smart environments: buildings that know which lights to turn on/off and adjust the a/c or heat, cross-walks that help blind people cross the street
– assistive technologies and wearable AI: tech that helps people speak who have lost that ability, prosthetic devices for amputees, intelligent sound- and vision-canceling devices, smart interfaces used by the military
– filtering software (e.g., spam filters)
– tutorial programs
– recommender systems
Research Topic: Machine Learning
• Data mining
• Improvements to HMM and Bayesian learning
• Other mathematical models, such as support vector machines, regression-based methods, and new forms of clustering
• EBL (explanation-based learning) and case-based reasoning approaches
• Application areas include
– computer vision, natural language processing, speech recognition (training)
– search engines
– medical diagnosis, bioinformatics
– stock market analysis
– detecting credit card fraud
– adaptive websites
Research Topic: Homeland Security
• Given the sheer amount of data available through
– communications networks (cell phones, the Internet, land-line phones) and newspaper ads
– video captured from cameras in transportation centers and outdoor cameras
• the goal is to interpret the data and recognize threats and identities
– facial recognition
– recognizing the source of an online post (through the context of how it was written) or of hand-written messages through handwriting analysis
– identifying, based on content, whether something constitutes a threat
– linking together terrorist websites for easier analysis
AI in Society
• Fifty years ago, people predicted that AI would be part of our society by 2000
– we don't have "AI", but AI is everywhere
– smart devices and appliances
– speech recognition and natural language understanding
– data interpretation/analysis built into hardware devices to save the diagnosticians a step or two
– fuzzy logic controllers
– software analysis and data mining tools used on Wall Street and by business analysts, economists, etc.
– autonomous vehicles and robots
• AI is pervasive in our society, but we don't have "AI"
– aside from a few stand-alone systems like Watson, Deep Blue, and Cyc
For Example: Agents
• Calendar/scheduling agents
• Customization of web sites / recommender agents
• Filtering agents (e.g., spam filters)
• Shopper agents
• Commerce agents
• Personal communications and secretary agents
• Research tools
• Assisting agents (for authors, musicians, artists)
• Homeland security (data analysis agents)
• Sensor interpretation
• These agents have earned a new name in our society: softbots
Some Predictions
• Next 5-10 years
– work continues on the semantic web, robotics/autonomous vehicles, NLP, SR, and vision
• Within 10 years
– part of the web is annotated for intelligent agent usage
– modest intelligent agents are added to a lot of application software
– robotic caretakers reach fruition (but are too expensive for most)
– SR reaches a sufficient level that continuous speech in specific domains is solved
– NLP in specific domains is solved
– reliable autonomous vehicles are used in specialized cases (e.g., military)
More Predictions
• Within 20 years
– robotic healthcare regularly available
– the vision problem largely solved
– autonomous vehicles available
– intelligent agents part of most software
– cognitive prosthetics
– the semantic web makes up a majority of web pages
– computers regularly pass the Turing Test
• Within 50 years
– nano-technology inserted into human bodies; humans augmented with computer memory and processors
• Within 100 years (?)
– true (strong) AI
More Predictions
• Want to place a bet? These bets are available from www.longbets.org/bets
– by 2020, wearable devices will be available that use speech recognition to monitor and index conversations and can be used as supplemental memories – Greg Webster (??)
– by 2025, at least half of US citizens will have some form of technology embedded in their bodies for ID/tracking – Douglas Hewes (CEO, Business Technologies)
– by 2029, no computer will have passed the Turing Test – Mitchell Kapor (betting against Ray Kurzweil, a well-known entrepreneur and technologist)
– by 2030, commercial passenger planes will fly pilotless – Eric Schmidt (CEO, Google)
– by 2050, no machine intelligence will be self-aware – Nova Spivack (CEO, Lucid Ventures)
– by 2108, a sentient AI will exist as a corporation, providing services as well as making its own financial and strategic decisions – Jane Walter (??)
AI and Ethics
• Is AI a good thing?
• Social concerns:
– unemployment
– liability and the need for security/safety in critical systems
– accuracy
– explanation
– AI and warfare
– security against AI surveillance; privacy
– end-of-the-world scenarios (e.g., Terminator)
• True AI might constitute a moral dilemma
– computers are devices; what would it mean to equip a device with intelligence? would it be a slave? would we give AI rights?