CS1321: Introduction to Programming
Georgia Institute of Technology, College of Computing
Lecture 19, October 29, 2001, Fall Semester
Today’s Menu • Generative Recursion: QuickSort • Generative Recursion: Fractals
Last time: Generative Recursion Up until the last lecture, we had been working with a form of recursion that had been driven by the “structure” or characteristics of the data types passed to it. Functions that processed a list of TAs or a tree full of numbers were “shaped” by the data definitions that they involved. A List or a Tree has built within it the idea that we will recur as we process the data as well as the idea that we will eventually terminate as we process the data.
Processing a List

(define (process-lon in-lon)
  (cond ((empty? in-lon) …)
        (else …(first in-lon)
              …(process-lon (rest in-lon)))))

Processing a Binary Tree

(define (process-BT in-BT)
  (cond ((not in-BT) …)
        (else …(node-data in-BT)
              …(process-BT (node-left in-BT))
              …(process-BT (node-right in-BT)))))

By simply having our code reflect the data definition, most of our code was written for us! These functions are driven by the "structure" of the data definition.
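The two templates above can be sketched in Python for concreteness (a stand-in for the lecture's Scheme; the Node class and the "sum" behavior filling the … slots are assumptions made for this sketch):

```python
def process_lon(lon):
    """Structural recursion on a list of numbers: a list is either
    empty or a first element plus the rest. Here the ... slots are
    filled in to compute the sum."""
    if not lon:                           # the (empty? in-lon) case
        return 0
    return lon[0] + process_lon(lon[1:])  # (first ...) and (rest ...)

class Node:
    """A binary-tree node: data plus optional left/right subtrees."""
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def process_bt(bt):
    """Structural recursion on a binary tree; None (like false in the
    Scheme version) marks an empty subtree and ends the recursion."""
    if bt is None:                        # the (not in-BT) case
        return 0
    return bt.data + process_bt(bt.left) + process_bt(bt.right)
```

In both cases the shape of the function follows the shape of the data definition, just as the slide describes.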
Last time: Generative Recursion As we began to discover last time, not all functions are shaped by the structure of the data being passed in. There exists a class of recursive functions for which the structure of the incoming data is merely a side note to the recursive process. These functions generate "new" data at each recursive step and rely on an algorithmic process to determine whether or not they should terminate.
Last Time: Our Examples Our recursion in this case is driven by the location of the ball relative to the pocket, not by any intrinsic properties of the data definition.
Last Time: Our Examples Merge Sort, rather than being driven by definition of the list, is driven by the size of the data being passed in.
Sorting Last time we explored the idea behind merge-sort, a generative sorting algorithm that employed a “divide-and-conquer” methodology to sort a sequence of data elements into ascending order. The algorithm for merge-sort involved splitting our list into two components of equal size, sorting each half recursively, and merging the result.
Sorting If we examine the code, we find that the actual "sorting" of our elements didn't take place until we had reached our termination condition (the list is empty), and were returning from our recursive calls.

(define (merge-sort lst)
  (cond [(empty? lst) empty]
        [else (local ((define halves (divide lst)))
                (cond [(empty? (first halves)) (first (rest halves))]
                      [(empty? (first (rest halves))) (first halves)]
                      [else (merge (merge-sort (first halves))
                                   (merge-sort (first (rest halves))))]))]))
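The same algorithm can be sketched in Python (the helper names divide and merge are modeled on the roles the Scheme helpers play; this is an illustration, not the course code):

```python
def divide(lst):
    """Split a list into two halves of (nearly) equal size."""
    mid = len(lst) // 2
    return [lst[:mid], lst[mid:]]

def merge(a, b):
    """Merge two already-sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def merge_sort(lst):
    """The sorting happens in merge, on the way back out of the recursion."""
    if len(lst) <= 1:                 # termination condition
        return lst
    halves = divide(lst)
    return merge(merge_sort(halves[0]), merge_sort(halves[1]))
```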
Quick Sort Quick sort is another generative “divide and conquer” algorithm that involves splitting our sequence of data into parts and sorting each component recursively. Unlike merge sort, however, quick sort performs the actual sorting as the sequence is being split. In terms of performance (which we’ll formally define in the next few weeks), quick sort is considered to be a “better” algorithm than merge sort.
Quick Sort: The basic premise
• The basic idea of quick sort involves splitting up our list around a pivot item.
• We arbitrarily choose an element of our list to be the pivot item.
• We then separate our list into two components:
  • those elements that are smaller than our pivot
  • those elements that are larger than our pivot
• We recursively sort the two components, and join the results together with our pivot item to form a sorted list.
36 23 45 14 6 67 33 42

Here we start off with a list of unsorted elements. We arbitrarily choose a pivot point about which to organize our list. For the sake of expediency, let's just choose the first element* of our list to be our pivot.

* Many theory books dedicate whole chapters to choosing the optimal pivot element. We're just keeping things simple.
Now we have to organize our list about our pivot point. Tracing the whole process on our example list (brackets mark the sub-lists being partitioned around each pivot):

  36 23 45 14 6 67 33 42                   the unsorted list; pivot = 36
  [23 14 6 33]  36  [45 67 42]             partition around 36
  [14 6] 23 [33]  36  [42] 45 [67]         recur on each side; pivots 23 and 45
  [6] 14 []                                recur again on (14 6); pivot 14
  6 14 23 33    36    42 45 67             the sub-lists come back sorted
  6 14 23 33 36 42 45 67                   appended around 36: the final result
Quicksort: the code
• We arbitrarily choose an element of our list to be the pivot item.
• We then separate our list into two components:
  • those elements that are smaller than our pivot
  • those elements that are larger than our pivot
• We recursively sort the two components, and join the results together with our pivot item to form a sorted list.

The result of quick-sort is a sorted list of numbers. If one list contains the items smaller than the pivot, and one contains the items larger than the pivot, the result we're looking for should be: smaller + pivot + larger.

(define (quick-sort in-lon)
  (cond ((empty? in-lon) empty)
        (else (local ((define pivot (first in-lon)))
                (append (quick-sort (smaller-items in-lon pivot))
                        (list pivot)
                        (quick-sort (larger-items in-lon pivot)))))))
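The same quick-sort can be sketched in Python, mirroring the Scheme version line for line (smaller_items and larger_items correspond to the helpers named on the slide; this is an illustration, not the course code):

```python
def smaller_items(lon, pivot):
    """All elements strictly smaller than the pivot."""
    return [x for x in lon if x < pivot]

def larger_items(lon, pivot):
    """All elements strictly larger than the pivot."""
    return [x for x in lon if x > pivot]

def quick_sort(lon):
    """smaller + pivot + larger, each part sorted recursively."""
    if not lon:                        # the (empty? in-lon) case
        return []
    pivot = lon[0]
    return (quick_sort(smaller_items(lon, pivot))
            + [pivot]
            + quick_sort(larger_items(lon, pivot)))
```

One caveat worth noticing: with strict comparisons on both sides, any duplicates of the pivot value fall into neither partition and are silently dropped. Partitioning only the rest of the list (without the pivot occurrence) and making one comparison non-strict would preserve them.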
smaller-items & larger-items smaller-items is a function that takes in a list of numbers and a target item and creates a list of numbers containing only the elements that are smaller than the target item. larger-items is a function that takes in a list of numbers and a target item and creates a list of numbers containing only the elements that are larger than the target item. My goodness, those functionalities are awfully similar… Maybe you should ponder this…
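Since the two helpers differ only in the comparison they apply, one way to "ponder this" is to factor the comparison out as a function parameter (the name items_where is made up for this Python sketch):

```python
def items_where(lon, keep):
    """Keep exactly those elements for which keep(x) is true; the
    predicate is the only thing that differed between the two helpers."""
    return [x for x in lon if keep(x)]

def smaller_items(lon, pivot):
    return items_where(lon, lambda x: x < pivot)

def larger_items(lon, pivot):
    return items_where(lon, lambda x: x > pivot)
```

(In Scheme, the built-in filter plays exactly the role of items_where here.)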
Fractals Perhaps the best-known (or at least most widely recognized) example of generative recursion lies in the idea of fractals. A formal definition: a fractal is a geometrical figure consisting of an identical motif that repeats itself on an ever-decreasing scale. The images produced by fractals range greatly…
But how does it work? Back to the reality of the situation: how does this work, and why is it called "generative recursion"?
At each step… At each step in the recursive process, we “generate” a new set of data based on established rules. At each step, we check to see if we’ve met a termination condition set up by our algorithm.
Example: Sierpinski’s Triangle Many of you may have already been exposed to Sierpinski’s triangle. It has a very simple set of rules:
Given a triangle… Given a triangle with corners A, B, and C, calculate the coordinates of the midpoints of each side of the triangle. Draw lines connecting each of these midpoints. For each new triangle generated except for the center triangle, repeat the process. Do this until you can no longer draw triangles (they are too small to see).
We determine the midpoints of each of the three sides through simple arithmetic: each midpoint is the average of the coordinates of the side's two endpoints.
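That averaging step is a one-liner; a minimal sketch in Python:

```python
def midpoint(p, q):
    """Midpoint of two 2-D points: average each coordinate."""
    (x1, y1), (x2, y2) = p, q
    return ((x1 + x2) / 2, (y1 + y2) / 2)
```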
We repeat our algorithm for each of the “corner triangles”, generating new images.
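Putting the rules together, the whole generative process can be sketched as follows (a Python illustration, not the course code: instead of drawing, it collects the triangles that would be drawn, and the "can no longer see them" test is modeled as a minimum side length):

```python
import math

def midpoint(p, q):
    """Average each coordinate of two 2-D points."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def side(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def sierpinski(a, b, c, min_side, out):
    """Generative recursion: at each step we *generate* three new
    corner triangles, and an algorithmic test (side length) decides
    when to terminate, not the structure of the input data."""
    if side(a, b) < min_side:          # too small to see: stop
        return
    out.append((a, b, c))
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    sierpinski(a, ab, ca, min_side, out)   # repeat on the corner
    sierpinski(ab, b, bc, min_side, out)   # triangles; the center
    sierpinski(ca, bc, c, min_side, out)   # triangle is left alone
```

Each level halves the side length, so the threshold guarantees termination even though no input data structure shrinks structurally.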