
Python logic

Explore hill-climbing techniques such as random restart, simulated annealing, and tabu search, and see how they improve convergence and efficiency on hard optimization problems. The deck then moves on to propositional logic: entailment, model checking, resolution refutation, and the step toward probabilistic reasoning.


Presentation Transcript


  1. Python logic
• Tell me what you do with witches?
• Burn!
• And what do you burn apart from witches?
• More witches!
• Shh!
• Wood!
• So, why do witches burn?
• [pause]
• B--... 'cause they're made of... wood?
• Good! Heh heh.
• Oh, yeah. Oh.
• So, how do we tell whether she is made of wood? [...]
• Does wood sink in water?
• No. No.
• No, it floats! It floats!
• Throw her into the pond!
• The pond! Throw her into the pond!
• What also floats in water?
• Bread!
• Apples!
• Uh, very small rocks!
• ARTHUR: A duck!
• CROWD: Oooh.
• BEDEVERE: Exactly. So, logically...
• VILLAGER #1: If... she... weighs... the same as a duck,... she's made of wood.
• BEDEVERE: And therefore?
• VILLAGER #2: A witch!
• VILLAGER #1: A witch!

  2. Problematic scenarios for hill-climbing (e.g., ridges, local minima)
• When the state-space landscape has local minima, any search that moves only in the greedy direction cannot be (asymptotically) complete.
• Random walk, on the other hand, is asymptotically complete.
• Idea: put random walk into greedy hill-climbing.
Solution(s): random restart hill-climbing; do the non-greedy thing with some probability p > 0; use simulated annealing.

  3. The middle ground between hill-climbing and systematic search
• Hill-climbing has a lot of freedom in deciding which node to expand next, but it is incomplete even for finite search spaces.
• Good for problems which have solutions, but where the solutions are non-uniformly clustered.
• Systematic search is complete (because its search tree keeps track of the parts of the space that have been visited).
• Good for problems where solutions may not exist, or where the whole point is to show that there are no solutions (e.g. the propositional entailment problem to be discussed later), or where the state space is densely connected (making repeated exploration of states a big issue).
Smart idea: try the middle ground between the two?

  4. Tabu Search
• A variant of hill-climbing search that attempts to reduce the chance of revisiting the same states.
• Idea: keep a "tabu" list of states that have been visited in the past. Whenever a node in the local neighborhood is found in the tabu list, remove it from consideration (even if it happens to have the best heuristic value among all neighbors).
• Properties:
• As the size of the tabu list grows, hill-climbing asymptotically becomes "non-redundant" (it won't look at the same state twice).
• In practice, a reasonably sized tabu list (say, 100 entries or so) improves the performance of hill-climbing on many problems.
Plain hill-climbing has O(1) space complexity, but no termination or completeness guarantee: because it doesn't know where it has been, it can loop even in finite search spaces.
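A minimal, illustrative sketch of the idea (not taken from the slides): hill-climbing that keeps a fixed-size tabu list and skips tabu neighbors even when they score best. The `neighbors` and `value` callables are assumed, problem-specific hooks.

    # Minimal sketch of hill-climbing with a tabu list (illustrative; the
    # `neighbors` and `value` functions are assumed, problem-specific callables).
    from collections import deque

    def tabu_hill_climb(start, neighbors, value, tabu_size=100, max_steps=10000):
        current = start
        best = start
        tabu = deque(maxlen=tabu_size)   # fixed-size "tabu" list of visited states
        tabu.append(start)
        for _ in range(max_steps):
            # Drop neighbors already on the tabu list, even if they look best.
            candidates = [s for s in neighbors(current) if s not in tabu]
            if not candidates:
                break
            current = max(candidates, key=value)   # greedy step among the rest
            tabu.append(current)
            if value(current) > value(best):
                best = current
        return best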

  5. Making Hill-Climbing Asymptotically Complete
• Random restart hill-climbing: keep some bound B. When you have made more than B moves, reset the search with a new random initial seed and start again.
• Getting a random new seed in an implicit search space is non-trivial! In the 8-puzzle, if you generate a "random" state by making random moves from the current state, you are still not truly random (you will stay in one of the two components of the state space).
• "Biased random walk": avoid always being greedy when choosing the seed for the next iteration. With probability p, choose the best child; with probability (1-p), choose one of the children randomly.
• Simulated annealing: similar to the previous idea, except the probability p itself is increased asymptotically to one (so you are more likely to tolerate a non-greedy move in the beginning than towards the end).
With the random restart or biased random walk strategies, we can solve very large problems (e.g., million-queen problems) in minutes!
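A minimal sketch of the standard temperature-based formulation of simulated annealing mentioned in the last bullet (illustrative only; `neighbors` and `value`, with higher values better, are assumed problem-specific callables). Early on, with a high temperature T, non-greedy moves are often tolerated; as T cools, the search becomes effectively greedy.

    # Illustrative simulated-annealing sketch: tolerate non-greedy moves early on,
    # become greedier as the "temperature" T cools.
    import math, random

    def simulated_annealing(start, neighbors, value, T0=1.0, cooling=0.995, T_min=1e-3):
        current = start
        T = T0
        while T > T_min:
            nxt = random.choice(neighbors(current))
            delta = value(nxt) - value(current)
            # Always accept improvements; accept worsening moves with
            # probability exp(delta / T), which shrinks as T cools.
            if delta > 0 or random.random() < math.exp(delta / T):
                current = nxt
            T *= cooling
        return current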

  6. Ideas for improving convergence:
• Random restart hill-climbing: after every N iterations, start with a completely random assignment.
• Probabilistic greedy: with probability p do what the greedy strategy suggests; with probability (1-p) pick a random variable and change its value randomly. p can increase as the search progresses.
A greedier version of the above (pick both the best variable and the best value): for each variable v, let l(v) be the value it can take so that the number of conflicts is minimized, and let n(v) be the number of conflicts with that value. Pick the variable v with the lowest n(v) value and assign it the value l(v).
[The slide shows the two variants side by side; both look at the same neighborhood, but the second is greedier.] This basically searches the 1-neighborhood of the current assignment (where the k-neighborhood is all assignments that differ from the current assignment in at most k variable values).
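A small illustrative sketch of the standard min-conflicts strategy for n-queens (pick a random conflicted variable, give it its least-conflicting value). This is the 1-neighborhood search described above; the greedier pick-best-variable variant differs only in how the column is chosen.

    # Illustrative min-conflicts sketch for n-queens (one queen per column;
    # board[i] = row of the queen in column i).
    import random

    def conflicts(board, col, row):
        # Number of other queens attacking square (col, row).
        return sum(1 for c in range(len(board)) if c != col and
                   (board[c] == row or abs(board[c] - row) == abs(c - col)))

    def min_conflicts(n=50, max_steps=100000):
        board = [random.randrange(n) for _ in range(n)]
        for _ in range(max_steps):
            conflicted = [c for c in range(n) if conflicts(board, c, board[c]) > 0]
            if not conflicted:
                return board                      # solution found
            col = random.choice(conflicted)       # pick a conflicted variable
            # Move its queen to the row with the fewest conflicts.
            board[col] = min(range(n), key=lambda r: conflicts(board, col, r))
        return None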

  7. Model-checking by Stochastic Hill-climbing: applying the min-conflicts idea to satisfiability.
Clauses: 1. (p,s,u)  2. (~p,q)  3. (~q,r)  4. (q,~s,t)  5. (r,s)  6. (~s,t)  7. (~s,u)
• Start with a model (a random t/f assignment to the propositions).
• For i = 1 to max_flips do:
• If the model satisfies the clauses, then return the model.
• Else clause := a randomly selected clause from clauses that is false in the model.
• With probability p, flip whichever symbol in clause maximizes the number of satisfied clauses. /* greedy step */
• With probability (1-p), flip the value in the model of a randomly selected symbol from clause. /* random step */
• Return failure.
Consider the assignment "all false": clauses 1 (p,s,u) and 5 (r,s) are violated. Pick one, say 5 (r,s). If we flip r, clause 1 remains violated; if we flip s, clauses 4, 6, 7 become violated. So the greedy thing is to flip r (we get all false, except r); otherwise, pick either randomly.
Remarkably good in practice!! So good that people started wondering if there actually are any hard problems out there.
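A short, illustrative sketch of the procedure above (essentially WalkSAT), using the slide's seven clauses encoded with integers p=1, q=2, r=3, s=4, t=5, u=6.

    # Illustrative WalkSAT-style sketch of the procedure on the slide. Clauses are
    # tuples of integer literals (e.g. (1, -2) means p1 OR NOT p2); a model maps
    # each variable index to True/False.
    import random

    def satisfies(clause, model):
        return any((lit > 0) == model[abs(lit)] for lit in clause)

    def walksat(clauses, n_vars, p=0.5, max_flips=100000):
        model = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            unsat = [c for c in clauses if not satisfies(c, model)]
            if not unsat:
                return model                      # all clauses satisfied
            clause = random.choice(unsat)         # a randomly chosen false clause
            if random.random() < p:
                # Greedy step: flip the symbol that maximizes satisfied clauses.
                def score(lit):
                    v = abs(lit)
                    model[v] = not model[v]
                    s = sum(satisfies(c, model) for c in clauses)
                    model[v] = not model[v]
                    return s
                var = abs(max(clause, key=score))
            else:
                var = abs(random.choice(clause))  # random walk step
            model[var] = not model[var]
        return None                               # failure

    # The seven clauses from the slide: p=1, q=2, r=3, s=4, t=5, u=6
    clauses = [(1, 4, 6), (-1, 2), (-2, 3), (2, -4, 5), (3, 4), (-4, 5), (-4, 6)]
    print(walksat(clauses, 6))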

  8. If most SAT problems are easy, then exactly where are the hard ones?

  9. Hardness of 3-SAT as a function of #clauses/#variables. [Figure: the probability that a satisfying assignment exists, and the cost of solving (either by finding a solution or showing there isn't one), plotted against #clauses/#variables. You would expect a gradual change through p = 0.5; this is what actually happens: a sharp transition near #clauses/#variables ≈ 4.3, where the cost of solving peaks.]

  10. Phase Transition in SAT: theoretically, we only know that the phase-transition ratio lies between 3.26 and 4.596. Experimentally, it seems to be close to 4.3. (We also have a proof that 3-SAT has a sharp threshold.)
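A toy experiment (not from the slides) that hints at the transition: generate random 3-SAT instances at several clause/variable ratios and check satisfiability by brute force. With only 12 variables the transition is blurry; it sharpens as the number of variables grows.

    # Small, illustrative experiment: fraction of random 3-SAT instances that are
    # satisfiable as the clause/variable ratio grows (brute force, so keep n tiny).
    import itertools, random

    def random_3sat(n_vars, n_clauses):
        clauses = []
        for _ in range(n_clauses):
            vs = random.sample(range(1, n_vars + 1), 3)
            clauses.append(tuple(v if random.random() < 0.5 else -v for v in vs))
        return clauses

    def satisfiable(clauses, n_vars):
        for bits in itertools.product([False, True], repeat=n_vars):
            model = dict(zip(range(1, n_vars + 1), bits))
            if all(any((lit > 0) == model[abs(lit)] for lit in c) for c in clauses):
                return True
        return False

    n = 12
    for ratio in [2.0, 3.0, 4.0, 4.3, 5.0, 6.0]:
        m = int(ratio * n)
        sat = sum(satisfiable(random_3sat(n, m), n) for _ in range(30))
        print(f"m/n = {ratio:.1f}: {sat}/30 satisfiable")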

  11. Progress in nailing the bound (just FYI): http://www.ipam.ucla.edu/publications/ptac2002/ptac2002_dachlioptas_formulas.pdf

  12. "Beam search" for hill-climbing
• Hill-climbing, as described, uses one seed solution that is continually updated. Why not use multiple seeds?
• Stochastic hill-climbing uses multiple seeds (k seeds, k > 1). In each iteration, the neighborhoods of all k seeds are evaluated, and from these neighborhoods k new seeds are selected probabilistically.
• The probability that a seed is selected is proportional to how good it is.
• This is not the same as running k hill-climbing searches in parallel.
• Stochastic hill-climbing is "almost" close to the way evolution seems to work, with one difference:
• Define the neighborhood in terms of the combination of pairs of current seeds (sexual reproduction; crossover).
• The probability that a seed from the current generation gets to "mate" to produce offspring in the next generation is proportional to the seed's goodness.
• To introduce randomness, apply mutation to the offspring.
• This type of stochastic beam-search hill-climbing algorithm is called a genetic algorithm.
• Genetic algorithms limit the number of matings to keep the number of seeds the same.
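A compact, illustrative genetic-algorithm sketch of the scheme described above: fitness-proportional mating, one-point crossover, and mutation. The `fitness` function is an assumed problem-specific hook (it should return positive scores); the toy usage maximizes the number of 1s in a bit string.

    # Illustrative genetic-algorithm sketch for bit-string individuals.
    import random

    def genetic_algorithm(fitness, n_bits, pop_size=50, generations=200, p_mut=0.01):
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            weights = [fitness(ind) for ind in pop]
            new_pop = []
            for _ in range(pop_size):
                # Parents "mate" with probability proportional to their fitness.
                mom, dad = random.choices(pop, weights=weights, k=2)
                cut = random.randrange(1, n_bits)          # crossover point
                child = mom[:cut] + dad[cut:]
                # Mutation injects randomness into the offspring.
                child = [b ^ 1 if random.random() < p_mut else b for b in child]
                new_pop.append(child)
            pop = new_pop
        return max(pop, key=fitness)

    # Toy usage: maximize the number of 1s ("one-max").
    print(genetic_algorithm(fitness=sum, n_bits=30))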

  13. Illustration of Genetic Algorithms in Action. Very careful modeling is needed so that the things emerging from crossover and mutation are still potential seeds (and not monkeys typing Hamlet). Is the "genetic" metaphor really buying anything?

  14. Hill-climbing in "continuous" search spaces (example: finding the cube root using Newton-Raphson approximation)
• Gradient descent (which you study in calculus of variations) is a special case of hill-climbing search applied to continuous search spaces.
• The local neighborhood is defined in terms of the "gradient" or derivative of the error function.
• Since the error-function gradient will be zero near the minimum and higher farther from it, you tend to take smaller steps near the minimum and larger steps farther away from it (just as you would want).
• Gradient descent is guaranteed to converge to the global minimum if alpha (the step size) is small and the error function is "uni-modal" (i.e., has only one minimum).
• Versions of gradient-descent algorithms will be used in neural-network learning. Unfortunately, the error function is NOT unimodal for multi-layer neural networks, so you will have to augment gradient descent with ideas such as simulated annealing to increase the chance of reaching the global minimum.
[Figure: Err = |x^3 - a|, with the iterates x_0, x_1, ... approaching a^(1/3). There are tons of variations based on how alpha is set.]
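Two tiny illustrative sketches of the slide's cube-root example (not the slide's exact figure): a Newton-Raphson iteration on f(x) = x^3 - a, and a plain gradient-descent loop on Err(x) = (x^3 - a)^2 with a fixed step size alpha, one of the "tons of variations" the slide alludes to.

    # Newton-Raphson for the cube root: find x with x**3 = a via
    # x <- x - f(x)/f'(x), with f(x) = x**3 - a.
    def cube_root(a, x0=1.0, tol=1e-12, max_iter=100):
        x = x0
        for _ in range(max_iter):
            fx = x**3 - a
            if abs(fx) < tol:
                break
            x = x - fx / (3 * x**2)   # Newton step; steps shrink near the root
        return x

    # Plain gradient descent on Err(x) = (x**3 - a)**2 with fixed step size alpha.
    def cube_root_gd(a, x0=1.0, alpha=1e-3, steps=20000):
        x = x0
        for _ in range(steps):
            grad = 2 * (x**3 - a) * 3 * x**2   # dErr/dx
            x -= alpha * grad
        return x

    print(cube_root(27.0))      # ~3.0
    print(cube_root_gd(27.0))   # ~3.0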

  15. Python logic (the Monty Python witch-trial dialogue from slide 1 repeats here)

  16. Representation Reasoning

  17. Ontological and epistemological commitments of different logics (what kinds of facts they represent, and what an agent can believe about assertions):
• Prop logic: facts; t/f/unknown
• FOPC: facts, objects, relations; t/f/unknown
• Prob prop logic: facts; degree of belief
• Prob FOPC: facts, objects, relations; degree of belief

  18. Think of a sentence as the stand-in for a set of worlds (where it is true)

  19. The sentence is true in all worlds (rows) where the KB is true, so it is entailed.

  20. Proof by model checking: the truth-table column for KB & ~a is False in every row. So, to check whether KB entails a, negate a, add it to the KB, and try to show that the resulting (propositional) theory has no satisfying models (this must be done with systematic methods).
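A minimal truth-table model checker illustrating the recipe above: KB |= a exactly when no world satisfies KB & ~a (equivalently, a holds in every world where KB holds). Sentences are represented here as Python predicates over a world; the toy usage borrows the WMD sentences from a later slide.

    # Illustrative truth-table entailment check.
    import itertools

    def entails(kb, alpha, symbols):
        for bits in itertools.product([False, True], repeat=len(symbols)):
            world = dict(zip(symbols, bits))
            if kb(world) and not alpha(world):   # a world where KB holds but alpha fails
                return False
        return True                              # alpha true wherever KB is true

    # Toy usage: (W => J) and (~W => J) |= J
    kb = lambda w: (not w["W"] or w["J"]) and (w["W"] or w["J"])
    alpha = lambda w: w["J"]
    print(entails(kb, alpha, ["W", "J"]))   # True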

  21. Connection between Entailment and Satisfiability
• The Boolean satisfiability problem is closely connected to propositional entailment.
• Specifically, propositional entailment is the "conjugate" problem of Boolean satisfiability, since we have to show that KB & ~f has no satisfying model in order to show that KB |= f.
• Of late, our ability to solve very large scale satisfiability problems has increased quite significantly.

  22. Inference rules
• Sound (but incomplete):
• Modus Ponens: A=>B, A |= B
• Modus Tollens: A=>B, ~B |= ~A
• Abduction (??): A=>B, ~A |= ~B
• Chaining: A=>B, B=>C |= A=>C
• Complete (but unsound): "Python" logic (KB true but theorem not true).
How about SOUND & COMPLETE? Resolution (needs normal forms).

  23. Need something that does case analysis.
• If WMDs are found, the war is justified: W => J
• If WMDs are not found, the war is still justified: ~W => J
• Is the war justified anyway? |= J?
• Can Modus Ponens derive it?

  24. Modus Ponens, Modus Tollens, etc. are special cases of resolution!
• Forward: apply resolution steps until the fact f you want to prove appears as a resolvent.
• Backward (resolution refutation): add the negation of the fact f you want to derive to the KB, then apply resolution steps until you derive an empty clause.

  25. If WMDs are found, the war is justified: ~W V J. If WMDs are not found, the war is still justified: W V J. Is the war justified anyway? |= J?
Resolving ~W V J against W V J gives J V J = J. We don't need to use other equivalences if we use resolution in refutation style: add ~J, resolve it with ~W V J and with W V J (deriving ~W and W), and the empty clause follows.

  26. CNF is a.k.a. the product-of-sums form, and DNF the sum-of-products form (from CSE/EEE 120). Prolog without variables and without the cut operator is doing Horn-clause theorem proving: for any KB in Horn form, Modus Ponens is a sound and complete inference.

  27. Conversion to CNF form: ANY propositional logic sentence can be converted into CNF form. Try: ~(P&Q) => ~(R V W)
• A CNF clause is a disjunction of literals; a literal is a proposition or a negated proposition.
• Conversion steps:
• Remove implication
• Pull negation in (De Morgan's laws)
• Distribute disjunction over conjunction
• Separate the conjunction into clauses
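One possible worked answer for the exercise on the slide, following the four steps:
• Start: ~(P&Q) => ~(R V W)
• Remove implication (and cancel the double negation): (P&Q) V ~(R V W)
• Pull negation in (De Morgan): (P&Q) V (~R & ~W)
• Distribute disjunction over conjunction: (P V ~R) & (P V ~W) & (Q V ~R) & (Q V ~W)
• Separate into clauses: (P V ~R), (P V ~W), (Q V ~R), (Q V ~W)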

  28. Solving problems using propositional logic
• You need to write what you know as propositional formulas. Theorem proving will then tell you whether a given new sentence holds given what you know.
• Three kinds of queries:
• Is my knowledge base consistent? (i.e., is there at least one world where everything I know is true?) Satisfiability.
• Is the sentence S entailed by my knowledge base? (i.e., is it true in every world where my knowledge base is true?)
• Is the sentence S consistent with (possibly true given) my knowledge base? (i.e., is S true in at least one of the worlds where my knowledge base holds?) S is consistent if ~S is not entailed.
• But we cannot differentiate between degrees of likelihood among possible sentences.

  29. Steps in Resolution Refutation. Consider the following problem:
• If the grass is wet, then it is either raining or the sprinkler is on: GW => R V SP, i.e. ~GW V R V SP
• If it is raining, then Timmy is happy: R => TH, i.e. ~R V TH
• If the sprinklers are on, Timmy is happy: SP => TH, i.e. ~SP V TH
• If Timmy is happy, then he sings: TH => SG, i.e. ~TH V SG
• Timmy is not singing: ~SG
• Prove that the grass is not wet: |= ~GW?
Refutation: add GW (the negated goal) and resolve, deriving R V SP, then TH V SP, then SG V SP, then SP, TH, SG, and finally the empty clause against ~SG.
Is there search in inference? Yes!! Many possible inferences can be done, but only a few are actually relevant. Idea: Set of Support: at least one of the resolved clauses is a goal clause, or a descendant of a clause derived from a goal clause. (Used in the example here!!)

  30. Search in Resolution
• Convert the database into clausal form Dc.
• Negate the goal first, and then convert it into clausal form DG.
• Let D = Dc + DG.
• Loop:
• Select a pair of clauses C1 and C2 from D. Different control strategies can be used to select C1 and C2 to reduce the number of resolutions tried:
• Idea 1: Set of Support: at least one of C1 or C2 must be either the goal clause or a clause derived by doing resolutions on the goal clause (*COMPLETE*).
• Idea 2: Linear input form: at least one of C1 or C2 must be one of the clauses in the input KB (*INCOMPLETE*).
• Resolve C1 and C2 to get C12.
• If C12 is the empty clause, QED!! Return success (we proved the theorem).
• D = D + C12.
• End loop. If we get here, we couldn't derive the empty clause. Return "failure".
• Finiteness is guaranteed if we make sure that we never resolve the same pair of clauses more than once, AND we use factoring, which removes multiple copies of literals from a clause (e.g. Q V P V P => Q V P).
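An illustrative sketch of the loop above, minus the control strategies: it simply saturates the clause set with all resolvents until the empty clause appears or nothing new can be derived. Clauses are frozensets of (symbol, polarity) literals (frozensets give factoring for free); the toy usage is the grass-wet example from the previous slide, with the negated goal GW added.

    # Brute-force propositional resolution refutation (no set-of-support).
    from itertools import combinations

    def resolve(c1, c2):
        # Return all resolvents of two clauses (resolving away one literal pair).
        resolvents = []
        for (sym, val) in c1:
            if (sym, not val) in c2:
                new = (c1 - {(sym, val)}) | (c2 - {(sym, not val)})
                resolvents.append(frozenset(new))   # frozenset also does factoring
        return resolvents

    def refute(clauses):
        clauses = set(clauses)
        while True:
            new = set()
            for c1, c2 in combinations(clauses, 2):
                for r in resolve(c1, c2):
                    if not r:
                        return True          # empty clause derived: QED
                    new.add(r)
            if new <= clauses:
                return False                 # nothing new: cannot derive empty clause
            clauses |= new

    # Toy usage: the grass-wet example; the goal ~GW is negated and added as GW.
    KB = [frozenset({('GW', False), ('R', True), ('SP', True)}),
          frozenset({('R', False), ('TH', True)}),
          frozenset({('SP', False), ('TH', True)}),
          frozenset({('TH', False), ('SG', True)}),
          frozenset({('SG', False)})]
    print(refute(KB + [frozenset({('GW', True)})]))   # True: KB |= ~GW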

  31. Mad chase for the empty clause…
• You must have everything in CNF clauses before you can resolve.
• The goal must be negated first before it is converted into CNF form.
• The goal (the fact to be proved) may get converted into multiple clauses (e.g. if we want to prove P V Q, then we get two clauses, ~P and ~Q, to add to the database).
• Resolution works by resolving away a single literal and its negation:
• P V Q resolved with ~P V ~Q is not empty! In fact, these clauses are not inconsistent (P true and Q false will make sure that both clauses are satisfied).
• P V Q is the negation of ~P & ~Q. The latter becomes two separate clauses, ~P and ~Q. So, by doing two separate resolutions with these two clauses we can derive the empty clause.

  32. Complexity of Propositional Inference
• Any sound and complete inference procedure has to be co-NP-complete (since model-theoretic entailment computation is co-NP-complete, because model-theoretic satisfiability is NP-complete).
• Given a propositional database of size d:
• Any sentence S that follows from the database by Modus Ponens can be derived in linear time.
• If the database has only HORN sentences (sentences whose CNF clauses have at most one positive literal; e.g. A & B => C), then MP is complete for that database. PROLOG uses (first-order) Horn sentences.
• Deriving all sentences that follow by resolution is co-NP-complete (exponential).
• Anything that follows by unit resolution can be derived in linear time. (Unit resolution: at least one of the clauses must be a clause of length 1.)

  33. Example. Pearl lives in Los Angeles. It is a high-crime area, so Pearl installed a burglar alarm. He asked his neighbors John and Mary to call him if they hear the alarm, so he can come home if there is a burglary. Los Angeles is also earthquake-prone, and the alarm goes off when there is an earthquake.
Burglary => Alarm
Earth-Quake => Alarm
Alarm => John-calls
Alarm => Mary-calls
• If there is a burglary, will Mary call? Check KB & E |= M.
• If Mary didn't call, is it possible that a burglary occurred? Check that KB & ~M doesn't entail ~B.

  34. Example (Real). Pearl lives in Los Angeles. It is a high-crime area, so Pearl installed a burglar alarm. He asked his neighbors John and Mary to call him if they hear the alarm, so he can come home if there is a burglary. Los Angeles is also earthquake-prone, and the alarm goes off when there is an earthquake.
Pearl lives in the real world, where (1) burglars can sometimes disable alarms, (2) some earthquakes may be too slight to cause the alarm, (3) even in Los Angeles, burglaries are more likely than earthquakes, (4) John and Mary both have their own lives and may not always call when the alarm goes off, (5) between John and Mary, John is more of a slacker than Mary, and (6) John and Mary may call even without the alarm going off.
Burglary => Alarm
Earth-Quake => Alarm
Alarm => John-calls
Alarm => Mary-calls
• If there is a burglary, will Mary call? Check KB & E |= M.
• If Mary didn't call, is it possible that a burglary occurred? Check that KB & ~M doesn't entail ~B.
• John already called. If Mary also calls, is it more likely that a burglary occurred?
• You now also hear on the TV that there was an earthquake. Is a burglary more or less likely now?

  35. How do we handle Real Pearl?
• Eager way: model everything! E.g., model exactly the conditions under which John will call: he shouldn't be listening to loud music, he hasn't gone on an errand, he didn't recently have a tiff with Pearl, etc. etc. A & c1 & c2 & c3 & ... & cn => J (also, the exceptions may have interactions, e.g. c1 & c5 => ~c9). The qualification and ramification problems make this an infeasible enterprise.
• Ignorant (non-omniscient) and lazy (non-omnipotent) way: model the likelihood. In 85% of the worlds where there was an alarm, John will actually call. How do we do this? Non-monotonic logics? "Certainty factors"? "Probability" theory?

  36. Probabilistic Calculus to the Rescue
• Suppose we know the likelihood of each of the (propositional) worlds (aka the joint probability distribution). Then we can use the standard rules of probability to compute the likelihood of all queries (as I will remind you).
• So, the joint probability distribution is all that you ever need! In the case of the Pearl example, we just need the joint probability distribution over B, E, A, J, M (32 numbers); in general, 2^n separate numbers (which should add up to 1).
• If the joint distribution is sufficient for reasoning, what is domain knowledge supposed to help us with? Answer: indirectly, by helping us specify the joint probability distribution with fewer than 2^n numbers. The local relations between propositions can be seen as "constraining" the form the joint probability distribution can take!
Burglary => Alarm; Earth-Quake => Alarm; Alarm => John-calls; Alarm => Mary-calls. With this structure, only 10 (instead of 32) numbers need to be specified!
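A small sketch of the "joint is all you need" point: with a full joint distribution over B, E, A, J, M in hand, any conditional query is just a ratio of sums of world probabilities. The numbers below are made up (a uniform joint, purely to exercise the code), not the Pearl example's actual distribution.

    # Answering queries directly from a full joint distribution (illustrative).
    from itertools import product

    def query(joint, target, given):
        # P(target | given), where target/given are dicts like {'M': True}.
        def match(world, cond):
            return all(world[k] == v for k, v in cond.items())
        num = sum(p for world, p in joint.items()
                  if match(dict(world), {**given, **target}))
        den = sum(p for world, p in joint.items() if match(dict(world), given))
        return num / den

    # A made-up joint distribution over the 32 worlds, just to exercise the code.
    names = ['B', 'E', 'A', 'J', 'M']
    worlds = [tuple(zip(names, vals)) for vals in product([True, False], repeat=5)]
    raw = {w: 1.0 for w in worlds}                  # start uniform...
    norm = sum(raw.values())
    joint = {w: p / norm for w, p in raw.items()}   # ...and normalize to sum to 1

    print(query(joint, {'M': True}, {'B': True}))   # P(M | B) = 0.5 under uniform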

  37. If B=>A then P(A|B) = ? P(B|~A) = ? P(B|A) = ?
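One way to answer the slide's questions, reading B => A as a hard constraint (no world has B true and A false): P(A|B) = P(A & B)/P(B) = P(B)/P(B) = 1, since every B-world is also an A-world; P(B|~A) = 0, since no ~A-world can be a B-world; and P(B|A) = P(A & B)/P(A) = P(B)/P(A), which the implication alone does not pin down (it depends on the prior probabilities of A and B).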
