LEGO Mindstorms NXT
SOURCES: Carnegie Mellon, Gabriel J. Ferrer, Dacta Lego, Timothy Friez, Miha Štajdohar, Anjum Gupta
Group: Roanne Manzano, Eric Tsai, Jacob Robison
Introductory Programming Robotics Projects
• Developed for a zero-prerequisite course
• Most students are not ECE or CS majors
• 4 hours per week: 2 meetings of 2 hours each
• Students build the robot outside class
Beginning Activities
• Bridge
• Tower
• LEGO man
• Organizing pieces
• Naming pieces
• Programming "robot people"
• Robots built from instructions
Teaching Ideas
• Teach mini-lessons as necessary
• Gears: power vs. speed
• Transmission of energy/motion
• Using fasteners
• Worm gears
• Building with bricks vs. building machines
(Illustrations: gear assemblies that spin vs. those that don't)
Project 1: Motors and Sensors (1)
• Introduce motors
• Drive with both motors forward for a fixed time
• Drive with one motor to turn
• Drive with opposing motors to spin
• Introduce subroutines (low-level motor commands get tiresome)
• Simple task: program a path (using time delays) to drive through the doorway
First Project (2)
• Introduce the touch sensor
• if statements: must touch the sensor at exactly the right time
• while loops: the sensor is constantly monitored
• Interesting problem: students try to put code in the loop body (e.g., set the motor power on each iteration); this causes confusion rather than harm
First Project (3)
• Combine infinite loops with conditionals
• Enables programming of alternating behaviors
• Front touch sensor hit => go backward
• Back touch sensor hit => go forward
• Braitenberg vehicles and state-machine-based robots
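The alternating behavior above can be sketched as a pure decision function. This is a minimal example in plain C (the course's RobotC is C-based), with the hardware reads and motor writes abstracted away so the logic can be checked off-robot; the function and type names are illustrative, not from the course materials.

```c
/* Alternating behavior: an endless loop would call this each iteration,
 * passing the two touch-sensor readings and the current travel direction. */
typedef enum { FORWARD = 1, BACKWARD = -1 } Direction;

Direction choose_direction(int frontTouchHit, int backTouchHit, Direction current) {
    if (frontTouchHit) return BACKWARD;  /* front bumper pressed: back away  */
    if (backTouchHit)  return FORWARD;   /* rear bumper pressed: go forward  */
    return current;                      /* otherwise keep the same behavior */
}
```

On the robot, the surrounding infinite loop would read the sensors, call this function, and set both motors accordingly.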
Project 2: Mobile Robot and Rotation Sensors (1)
• Physics of rotational motion
• Introduction of the rotation sensors (built into the motors)
• Balance wheel power: if left counts < right counts, increase left wheel power
• Race through an obstacle course
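One way to realize "if left counts < right counts, increase left wheel power" is a proportional correction. The slide only specifies the direction of the correction, so the gain and the clamping below are assumptions in this C sketch:

```c
/* Proportional wheel balancing from rotation counts: boost the lagging
 * wheel. Gain (divide by 2) is illustrative; NXT power runs 0..100. */
int balanced_left_power(int basePower, int leftCounts, int rightCounts) {
    int correction = (rightCounts - leftCounts) / 2;  /* left behind => positive */
    int power = basePower + correction;
    if (power > 100) power = 100;
    if (power < 0) power = 0;
    return power;
}
```

The symmetric function for the right wheel swaps the two count arguments.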
Second Project (2)
Complete this code with various conditions and various motions:

    if (/* write a condition to put here */) {
        nxtDisplayTextLine(2, "Drifting left");
    } else if (/* write a condition to put here */) {
        nxtDisplayTextLine(2, "Drifting right");
    } else {
        nxtDisplayTextLine(2, "Not drifting");
    }
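One possible completion, using the rotation-sensor disparity as the condition; this is only an example (students are meant to supply their own conditions), written as a pure C function that returns a code for each branch rather than calling the display:

```c
/* Returns -1, +1, or 0 for the three branches that the RobotC snippet
 * would display as "Drifting left", "Drifting right", "Not drifting". */
int drift_status(int leftCounts, int rightCounts) {
    if (leftCounts < rightCounts) {
        return -1;   /* left wheel lags, so the robot veers left  */
    } else if (leftCounts > rightCounts) {
        return +1;   /* right wheel lags, so the robot veers right */
    } else {
        return 0;    /* encoders agree: not drifting */
    }
}
```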
Project 3: Line Following
Line Following
• Use light sensors to follow a line in the least time
• Design and programming challenge
• Uses looping or repeating programs
• Robots appear to be "thinking"
The Line-Following Project
Objectives:
• Build a mobile robot and program it to follow a line
• Make the robot go as fast as possible
Challenges:
• Different lines (wide, thin, continuous, with gaps, sharp turns, line crossings, etc.)
• Control algorithms for 1, 2, and 3 sensors
• Real-time, changing environment
• Learning, adaptation
• Fault tolerance, error recovery
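The simplest one-sensor control algorithm for this project is a bang-bang edge follower. A minimal C sketch, with the threshold and power values as illustrative assumptions:

```c
/* One-sensor line-edge follower: over the dark line, curve one way; over
 * the light floor, curve the other, so the robot tracks the line's edge. */
#define THRESHOLD 45   /* midway between dark-line and light-floor readings */

void follow_edge_step(int lightReading, int *leftPower, int *rightPower) {
    if (lightReading < THRESHOLD) {  /* over the dark line: turn right */
        *leftPower  = 60;
        *rightPower = 20;
    } else {                         /* over the floor: turn left */
        *leftPower  = 20;
        *rightPower = 60;
    }
}
```

Calling this in a loop makes the robot zigzag along the edge; the two- and three-sensor algorithms refine this by steering proportionally or detecting crossings.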
(Figure: different control algorithms for different lines, wide and thin)
(Figure: different control algorithms for 1 and 3 sensors)
Techniques and Knowledge Used (1)
Real-time constraints appear when the robot goes as fast as possible:
• Sensor-reading and information-processing speed
• Motor/robot inertia, wheel slipping
Fault-tolerance and error-recovery techniques are used for:
• Unreliable sensor values
• An inaccurate surface
• Losing the line
Techniques and Knowledge Used (2)
Initial calibration and adaptation are used in the changing environment:
• Changes in the light intensity of the line (room lamps, robot shadow, ...)
• Battery charge
Learning techniques can be used to determine:
• How fast the robot can go (acceleration on long straight lines)
• How sharply the robot should turn
• How to avoid endless repetitions
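Initial calibration is typically done by sampling the sensor over the line and over the floor, then normalizing later readings between those two values. A C sketch under that assumption (the linear mapping and 0..100 range are illustrative):

```c
/* Calibration samples taken at start-up: one reading on the dark line,
 * one on the light floor. */
typedef struct { int dark; int light; } Calibration;

/* Map a raw reading into 0 (on the line) .. 100 (on the floor), so the
 * control code is insensitive to room lighting and battery charge. */
int normalized(const Calibration *cal, int raw) {
    int span = cal->light - cal->dark;
    if (span <= 0) return 50;              /* degenerate calibration: give up */
    int value = 100 * (raw - cal->dark) / span;
    if (value < 0) value = 0;              /* clamp readings outside samples */
    if (value > 100) value = 100;
    return value;
}
```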
Educational Benefits of the Line-Following Project
Students confront, use, and learn:
• Real-time constraints
• Robust, fault-tolerant control algorithms
• Error-recovery techniques
• Robot learning and adaptation to a changing environment
Project 4: Drawing Robot
• Pen-drawer: the first project with an effector
• Builds upon lessons from previous projects
• Limitations of rotation sensors: slippage is problematic
• Most helpful with a limit switch
• Shapes (square, circle)
• Word ("LEGO")
Project 5: Finding Objects (1)
• Light sensor: find a line
• Sonar sensor: find an object, find free space
Project 5: Finding Objects (2)
• Begin with following a line edge
• The robot follows a circular track
• It always turns right when the track is lost, so traversal is one-way
• Alternative strategy: the robot scans in both directions when the track is lost
• Each pair of scans increases in size
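The expanding two-way scan can be captured as a small function that yields each successive sweep. A hedged sketch: the step size and units are assumptions, and only the pattern (alternate directions, growing every pair) comes from the slide:

```c
/* Sweep for scan number n = 0, 1, 2, ...: alternate right/left, growing
 * every second scan. Positive = right, negative = left; units illustrative. */
#define SCAN_STEP 15

int scan_sweep(int n) {
    int size = (n / 2 + 1) * SCAN_STEP;   /* pair n/2 scans farther than pair n/2-1 */
    return (n % 2 == 0) ? size : -size;   /* even scans go right, odd go left */
}
```

The robot would execute scan_sweep(0), scan_sweep(1), ... until the light sensor sees the line again, then resume following.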
Project 5: Finding Objects (3)
• Once scanning works, replace the light-sensor reading with a sonar reading
• Scan when the distance is short: finds free space
• Scan when the distance is long: follows a moving object
Other Projects with Mobile Robots
• "Theseus": store the path (from line following) in an array; backtrack when the array fills
• Robotic forklift: finds, retrieves, and delivers an object
• Perimeter security robot: implemented on the RCX with 2 light sensors and 2 touch sensors
• Wall-following robot: build a rotating mount for the sonar
• Quantum Braitenberg robots of Arushi Raghuvanshi
• Maze robots of Stefan Gebauer and fuzzy robots of Chris Brawn
Project 6: Fuzzy Logic
• Implement a fuzzy expert system for the robot to perform a task
• Students are given code for using fuzzy logic to balance wheel-encoder counts
• Students write fuzzy experts that avoid an obstacle while wandering, or maintain a fixed distance from an object
Fuzzy Rules for Balancing Rotation Counts
Inference rules:
• biasRight => leftSlow
• biasLeft => rightSlow
• biasNone => leftFast
• biasNone => rightFast
Inference is trivial for this case; fuzzy membership and defuzzification are more interesting.
Fuzzy Membership Functions
• Disparity = leftCount - rightCount
• biasLeft is 1.0 up to a disparity of -100, then decreases linearly to 0.0 at 0
• biasRight is the mirror image
• biasNone is 0.0 up to -50, rises to 1.0 at 0, and falls back to 0.0 at 50
Defuzzification
• Use representative values: slow = 0, fast = 100
• Left wheel power: (leftSlow * repSlow + leftFast * repFast) / (leftSlow + leftFast)
• The right wheel is symmetric
• Defuzzified values are motor power levels
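The membership functions and defuzzification from the last three slides fit in a few lines of C. This sketch follows the slides' numbers exactly; the only addition is a guard for the case where neither rule for a wheel fires (e.g., a large negative disparity for the left wheel), which the slides leave unspecified and is here assumed to mean full speed:

```c
#include <math.h>

/* Membership functions over disparity d = leftCount - rightCount. */
static double bias_left(double d)  { return d <= -100 ? 1.0 : (d < 0 ? -d / 100.0 : 0.0); }
static double bias_right(double d) { return d >= 100 ? 1.0 : (d > 0 ? d / 100.0 : 0.0); }
static double bias_none(double d)  { return fabs(d) < 50 ? 1.0 - fabs(d) / 50.0 : 0.0; }

/* Representative values for defuzzification. */
static const double REP_SLOW = 0.0, REP_FAST = 100.0;

/* Left wheel: rules are biasRight => leftSlow, biasNone => leftFast. */
double left_power(double d) {
    double slow = bias_right(d), fast = bias_none(d);
    if (slow + fast < 1e-9) return REP_FAST;  /* no rule fires (assumption) */
    return (slow * REP_SLOW + fast * REP_FAST) / (slow + fast);
}

/* Right wheel is symmetric: biasLeft => rightSlow, biasNone => rightFast. */
double right_power(double d) {
    double slow = bias_left(d), fast = bias_none(d);
    if (slow + fast < 1e-9) return REP_FAST;
    return (slow * REP_SLOW + fast * REP_FAST) / (slow + fast);
}
```

At zero disparity both wheels run at full power; as disparity grows to the right, the left wheel's power falls smoothly toward zero, which is the balancing behavior the rules encode.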
Project 7: Q-Learning
• Discrete sets of states and actions
• States form an N-dimensional array, unfolded into one dimension in practice
• Individual actions are selected on each time step
• Q-values: a 2D array, indexed by state and action, of expected rewards for performing actions
Q-Learning Main Loop
• Select an action
• Change motor speeds
• Inspect sensor values
• Calculate the updated state
• Calculate the reward
• Update Q-values
• Set the "old state" to be the updated state
Calculating the State (Motors)
Three power levels for each motor:
• 100% power
• 93.75% power
• 87.5% power
Six motor states (three per motor, two motors)
Calculating the State (Sensors)
• No disparity: STRAIGHT
• Left/right disparity of 1-5: LEFT_1, RIGHT_1
• Disparity of 6-12: LEFT_2, RIGHT_2
• Disparity of 13+: LEFT_3, RIGHT_3
• Seven total sensor states
• 63 states overall (3 x 3 motor-power combinations x 7 sensor states)
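The "unfolded into one dimension" step and the disparity banding can be sketched in C. The exact index ordering is an assumption; any consistent scheme that covers all 3 x 3 x 7 = 63 combinations works:

```c
/* Unfold the (leftLevel, rightLevel, sensorState) triple into one index. */
#define MOTOR_LEVELS 3
#define SENSOR_STATES 7
#define NUM_STATES (MOTOR_LEVELS * MOTOR_LEVELS * SENSOR_STATES)  /* 63 */

int state_index(int leftLevel, int rightLevel, int sensorState) {
    return (leftLevel * MOTOR_LEVELS + rightLevel) * SENSOR_STATES + sensorState;
}

/* The seven sensor states, ordered left-to-right around STRAIGHT. */
enum { LEFT_3, LEFT_2, LEFT_1, STRAIGHT, RIGHT_1, RIGHT_2, RIGHT_3 };

/* Classify the encoder disparity into a sensor state per the slide's bands. */
int sensor_state(int disparity) {
    int mag = disparity < 0 ? -disparity : disparity;
    int band = mag == 0 ? 0 : (mag <= 5 ? 1 : (mag <= 12 ? 2 : 3));
    return disparity < 0 ? STRAIGHT - band : STRAIGHT + band;
}
```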
Action Set for Balancing Rotation Counts
• MAINTAIN: both motors unchanged
• UP_LEFT, UP_RIGHT: accelerate that motor by one motor state
• DOWN_LEFT, DOWN_RIGHT: decelerate that motor by one motor state
• Five total actions
Action Selection
• Decide whether the action is random (with probability epsilon)
• If random: select uniformly from the action set
• If not: visit each array entry for the current state and select the action with the maximum Q-value
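This is epsilon-greedy selection; a compact C sketch of the steps above. The array sizes match the slides (63 states, 5 actions), while the rand()-based sampling is an implementation assumption:

```c
#include <stdlib.h>

#define NUM_STATES 63
#define NUM_ACTIONS 5

/* Epsilon-greedy: with probability epsilon pick a uniformly random action,
 * otherwise scan the current state's row and pick the max-Q action. */
int select_action(double q[NUM_STATES][NUM_ACTIONS], int state, double epsilon) {
    if ((double)rand() / RAND_MAX < epsilon)
        return rand() % NUM_ACTIONS;           /* explore */
    int best = 0;                              /* exploit: argmax over Q[state] */
    for (int a = 1; a < NUM_ACTIONS; a++)
        if (q[state][a] > q[state][best]) best = a;
    return best;
}
```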
Calculating Reward
• No disparity => highest reward
• Reward decreases with increasing disparity
Updating Q-Values

    Q[oldState][action] = Q[oldState][action]
        + learningRate * (reward + discount * maxQ(currentState) - Q[oldState][action])
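The update formula translates directly into C. This sketch implements exactly the temporal-difference update shown above; the array dimensions are the 63 states and 5 actions from the earlier slides:

```c
#define NUM_STATES 63
#define NUM_ACTIONS 5

/* Max Q-value over all actions available in a state. */
static double max_q(double q[NUM_STATES][NUM_ACTIONS], int state) {
    double best = q[state][0];
    for (int a = 1; a < NUM_ACTIONS; a++)
        if (q[state][a] > best) best = q[state][a];
    return best;
}

/* One temporal-difference update, matching the slide's formula. */
void update_q(double q[NUM_STATES][NUM_ACTIONS], int oldState, int action,
              double reward, int currentState,
              double learningRate, double discount) {
    q[oldState][action] += learningRate *
        (reward + discount * max_q(q, currentState) - q[oldState][action]);
}
```

The main loop calls update_q once per time step, after observing the reward and the new state.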
Student Exercises
• Assess the performance of the wheel-balancer
• Experiment with different constants: learning rate, discount, epsilon
• Try an alternative reward function based on the change in disparity
Learning to Avoid Obstacles
• Robot equipped with sonar and a touch sensor
• Hitting the touch sensor is penalized
• Most successful formulation: reward increases with speed, with a big penalty for hitting the touch sensor
Other Classroom Possibilities
• Operating systems: inspect, document, and modify the firmware
• Programming languages: develop interpreters/compilers (NBC is an excellent target language)
• Supplementary labs for CS1/CS2
The Tug O' War
• Robots pull on opposite ends of a 2-foot string
• There are limits on mass, motors, and certain wheels
• Teaches integrity, torque, gearing, friction
• A good challenge for beginners
• Very little programming
Drag Race
• Least time to cross a set distance
• Straight, light, fast designs
• Teaches gearing, efficiency
• A nice contrast to the Tug O' War
• Little programming
Sprint Rally
• Cross the table and return, attempting to stay within the designated path
• Challenging programming; may use sensors
• Teaches precision, programming logic, prediction
Sumo (Autonomous)
• Robots push each other out of the ring
• A "real" competition
• Requires light sensors
• Encourages efficient, robust designs; power isn't everything
• Designs must anticipate unknown opponents
Sumo (Remote)
• Uses another RCX or tethered sensors for control (not the Mindstorms remote)
• Like BattleBots
• Still requires programming
• Driver skill is a factor
Other Challenge Possibilities
• Weight lifting, obstacle course, tightrope walking, soccer, maze navigation, dancing, golf, bipedal locomotion, tractor pull, and many more
• Cooperative robots
• Component design
• Time-limited robot design
• See the website, find more on the internet, or create your own
• Create specific rules; predict loopholes