Introduction to Robotics 4. Mathematics of sensor processing.
Examples • Location • Dead Reckoning • Odometry using potentiometers or encoders • Steering: differential, Ackerman • Inertial Navigation Systems (INS) • Optical Gyro • Resonant Fiber Optic Gyro • Ranging • Triangulation • MIT Near-IR Ranging • Time of flight
Potentiometers or pots • Low-cost rotational displacement sensors for applications with: • low speed • medium accuracy • no continuous rotation • Errors due to: • poor reliability caused by dirt • frictional loading of the shaft • electrical noise, etc. • Use has fallen off in favor of versatile incremental optical encoders.
Dead Reckoning • Definition: a simple mathematical procedure for determining the present location of a vessel (vehicle) by advancing some previous position through known course and velocity information. • The simplest implementation is termed odometry. • Odometry sensors: * potentiometers * encoders: brush, optical, magnetic, capacitive, inductive
Introduction to Odometry. Given a two-wheeled robot, odometry estimates position and orientation from the left and right wheel velocities as a function of time. B = the wheel separation
Differential Steering • Two individually controlled drive wheels • Enables the robot to spin in place • Robot displacement D and velocity V along the path of travel are D = (Dl + Dr)/2 and V = (Vl + Vr)/2, where Dl, Vl are the displacement and velocity of the left wheel, Dr, Vr the displacement and velocity of the right wheel, and Cl = circumference of the circle traveled by the left wheel.
Differential Steering. Solving for the heading change yields θ = (Dr − Dl)/d, and similarly for the rotational velocity ω = (Vr − Vl)/d, where d is the lateral wheel separation. The term d in the denominator is a significant source of error, due to the uncertainties associated with the effective point of contact of the tires.
Over an infinitesimal time increment, the speed of the wheels can be assumed constant, so the path has a constant radius of curvature: r = (d/2)·(Vr + Vl)/(Vr − Vl).
Differential Steering. Drive controller. The displacement of the left wheel is Dl = 2π·Rl·Nl/Ce, where φl = 2π·Nl/Ce is the wheel rotation, Rl = effective left-wheel radius, Nl = number of counts of the left encoder, and Ce = number of counts per wheel revolution; a similar relation holds for the right wheel. The drive controller will attempt to make the robot travel a straight line by ensuring Nl and Nr are the same. This is not an accurate method, since the effective wheel radius is a function of the compliance of the tire and the weight carried (empirical values; tire compliance is itself a function of wheel rotation).
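The odometry relations above can be sketched in code. This is a minimal sketch, not the slide's drive-controller implementation; the function name and parameters are mine, and a constant curvature is assumed over each sampling interval:

```python
import math

def odometry_step(x, y, theta, n_left, n_right, r_wheel, counts_per_rev, b):
    """Advance a dead-reckoned pose from incremental encoder counts.

    n_left/n_right : encoder counts since the last update
    r_wheel        : effective wheel radius (an empirical value)
    counts_per_rev : encoder counts per wheel revolution (Ce)
    b              : wheel separation
    """
    # Wheel displacements: D = 2*pi*R*N / Ce
    d_left = 2 * math.pi * r_wheel * n_left / counts_per_rev
    d_right = 2 * math.pi * r_wheel * n_right / counts_per_rev
    d_center = (d_left + d_right) / 2       # robot displacement
    d_theta = (d_right - d_left) / b        # heading change
    # Constant curvature over the (small) interval: use mid-interval heading
    x += d_center * math.cos(theta + d_theta / 2)
    y += d_center * math.sin(theta + d_theta / 2)
    theta += d_theta
    return x, y, theta
```

Equal counts on both wheels advance the pose along a straight line; opposite counts spin the robot in place, as the slide describes.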
Differential Steering. Other reasons for inaccuracies. In climbing over a step discontinuity of height h, the wheel rotates along the arc of the step, so the perceived (rolled) distance differs from the actual horizontal distance traveled. This displacement differential between the left and right drive wheels results in an instantaneous heading change. Floor slippage: this problem is especially noticeable in exterior implementations known as skid steering, routinely implemented in bulldozers and armored vehicles. Skid steering is employed only in teleoperated vehicles.
Ackerman Steering. The method of choice for outdoor autonomous vehicles. • Used to provide a fairly accurate dead-reckoning solution, while supporting the traction and ground-clearance needs of all-terrain operations. • Designed to ensure that when turning, the inside wheel is rotated to a slightly sharper angle than the outside wheel, thereby eliminating geometrically induced tire slippage. Ackerman equation: cot θo − cot θi = d/l, where θi, θo = relative steering angles of the inner/outer wheel, l = longitudinal wheel separation, d = lateral wheel separation. • Examples include: • HMMWV-based Teleoperated Vehicle (US Army) Program. • MDARS (Mobile Detection Assessment and Response System) Exterior - autonomous patrol vehicle.
Ackerman Steering (figure): turning geometry with longitudinal separation l, lateral separation d, and x = distance from the inside wheel to the center of rotation.
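The Ackerman geometry can be checked numerically. A small sketch (function name and numeric values are mine), using cot θi = x/l for the inside wheel and cot θo = (x + d)/l for the outside wheel:

```python
import math

def ackerman_angles(l, d, x):
    """Inner/outer steering angles for a turn about a point a distance x
    from the inside wheel (slide geometry).
    l = longitudinal wheel separation, d = lateral wheel separation."""
    theta_inner = math.atan2(l, x)       # inside wheel: sharper angle
    theta_outer = math.atan2(l, x + d)   # outside wheel: shallower angle
    return theta_inner, theta_outer

# The Ackerman relation cot(theta_o) - cot(theta_i) = d/l holds by construction
ti, to = ackerman_angles(l=2.5, d=1.5, x=4.0)
assert abs((1 / math.tan(to) - 1 / math.tan(ti)) - 1.5 / 2.5) < 1e-9
```

Steering both wheels to these distinct angles is what eliminates the geometrically induced tire slippage mentioned above.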
Inertial Navigation • Continuous sensing of acceleration along each of the 3D axes, integrated over time to derive velocity and position. • Implementations are demanding from the standpoint of minimizing the various error sources. • High-quality navigational systems have a typical drift of 1nm/h and only a few years ago cost $50K-$70K. • High-end systems perform to better than 0.1% of the distance traveled and used to cost $100K-$200K. • Today, relatively reliable equipment suitable for UGV navigation costs from $5K. • Low-cost fiber optic gyros and solid-state accelerometers were developed for INS.
Gyroscopes Mechanical gyroscopes operate by sensing the change in direction of some actively sustained angular or linear momentum. A typical two-axis flywheel gyroscope senses a change in direction of the angular momentum associated with a spinning motor.
Optical Gyroscopes • Principle first discussed by Sagnac (1913). • First ring laser gyro (1986) used a He-Ne laser. • Fiber optic gyros (1993) were installed in Japanese automobiles in the 90s. • The basic device: • two laser beams traveling in opposite directions (i.e. counter-propagating) around a closed-loop path. A standing wave is created by the counter-propagating light beams (Schulz-DuBois idealization model). • Constructive and destructive interference patterns • can be formed by splitting off and mixing a portion of the two beams • and are used to determine the rate and direction of rotation of the device.
Active Ring-Laser Gyro • Introduces light into the doughnut by filling the cavity with an active lasing medium. • Measures the change in path length ΔL as a function of the angular velocity of rotation Ω, the radius of the circular beam path r, and the speed of light c: ΔL = 4πr²Ω/c (Sagnac effect). • For lasing to occur in a resonant cavity, the round-trip beam path must precisely equal an integral number of wavelengths at the resonant frequency. • The frequencies of the two counter-propagating waves must therefore change, as only oscillations with wavelengths satisfying the resonance condition can be sustained in the cavity.
Active Ring-Laser Gyro • For an arbitrary cavity geometry with area A enclosed by the loop beam path, perimeter L, and wavelength λ, the beat frequency is Δf = 4AΩ/(λL). • The glass fiber forms an internally reflective waveguide for optical energy. • Multiple turns of fiber can implement the doughnut-shaped cavity, and the path change due to the Sagnac effect is then essentially multiplied by N, the number of turns.
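A quick numeric sketch of the ring-laser beat-frequency relation Δf = 4AΩ/(λL); the function name and example values are mine. For a circular path (A = πr², L = 2πr) the expression reduces to 2rΩ/λ:

```python
import math

def sagnac_beat_frequency(area, perimeter, omega, wavelength):
    """Ring-laser gyro beat frequency: df = 4*A*Omega / (lambda * L)."""
    return 4 * area * omega / (wavelength * perimeter)

r = 0.1                    # assumed 10 cm circular beam path
omega = math.radians(1.0)  # assumed 1 deg/s rotation rate
lam = 633e-9               # He-Ne laser wavelength (from the slide's device)
df = sagnac_beat_frequency(math.pi * r**2, 2 * math.pi * r, omega, lam)
# Circular-path special case: df = 2*r*Omega/lambda
assert abs(df - 2 * r * omega / lam) < 1e-6
```

Even a 1 deg/s rotation produces a beat frequency of several kHz, which is why the effect is measurable at all.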
Open-Loop Interferometric Fiber Optic Gyro (IFOG) • n = refractive index; the speed of light in the medium is c/n. • As long as the entry angle is less than a critical angle, the ray is guided down the fiber virtually without loss. • NA = √(n₁² − n₂²) is the numerical aperture of the fiber, where n₂ = index of refraction of the cladding and n₁ = index of refraction of the glass core. • We need a single-mode fiber, so that only the two counter-propagating waves can exist. But in such a fiber light may randomly change polarization states, so we need a special polarization-maintaining fiber.
Open-Loop IFOG • N = LDΩ/(λc) is the number of fringes of phase shift due to gyro rotation (L = fiber length, D = coil diameter). • Advantages: • reduced manufacturing costs • quick start-up • good sensitivity • Disadvantages: • long length of optical fiber required • limited dynamic range in comparison with active ring-laser gyros • scale-factor variations. Used in automobile navigation, pitch and roll indicators, and attitude stabilization.
Resonant Fiber-Optic Gyros • Evolved as a solid-state derivative of the passive ring gyro, which makes use of a laser source external to the ring cavity. • A passive resonant cavity is formed from a multi-turn closed loop of optical fiber. • Advantages: high reliability, long life, quick start-up, light weight, up to 100 times less fiber. • An input coupler injects frequency-modulated light in both directions. • In the absence of loop rotation, maximum coupling occurs at the resonant frequency. • If the loop rotates, the resonant frequency must shift: Δf = DΩ/(nλ), where D is the coil diameter and n the refractive index of the fiber.
Ranging • Distance measurement techniques: • triangulation • ToF: time of flight (pulsed) • PhS: phase-shift measurement (CW: continuous wave) • FM: frequency modulation (CW) • interferometry • Non-contact ranging sensors: • active: • Radar - ToF, PhS, FM • Sonar - ToF; relies on the speed of sound, which is slow compared with light and also works in water. • Lidar - laser-based ToF, PhS • passive
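Pulsed ToF ranging reduces to a one-line computation; a sketch with assumed example values:

```python
def tof_range(round_trip_time, propagation_speed):
    """Time-of-flight range: the pulse travels out and back, hence / 2."""
    return propagation_speed * round_trip_time / 2

# Lidar: light at ~3e8 m/s; a ~66.7 ns round trip corresponds to ~10 m.
assert abs(tof_range(66.7e-9, 3e8) - 10.0) < 0.01
# Sonar: sound in air at ~343 m/s needs ~58 ms for the same 10 m target,
# which is why timing sonar pulses is so much easier than timing light.
```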
GPS: Navstar Global Positioning System • 24-satellite system, orbiting the earth every 12h at an altitude of 10,900nm. • 4 satellites are located in each of 6 planes inclined 55° with respect to the earth's equator. • The absolute 3D location of any GPS receiver is determined by trilateration techniques based on time of flight for uniquely coded spread-spectrum radio signals transmitted by the satellites. • Problems: • time synchronization and the theory of relativity. • precise real-time location of satellites. • accurate measurement of signal propagation time. • sufficient signal-to-noise ratio.
GPS: Navstar Global Positioning System • spread-spectrum technique: each satellite transmits a periodic pseudo-random code on two different L-band frequencies (1575.42 and 1227.6 MHz) • Solutions: • time synchronization: atomic clocks. • precise real-time location of satellites: individual satellite clocks are monitored by dedicated ground tracking stations and continuously advised of their measured offsets from official GPS time. • accurate measurement of signal propagation time: a pseudo-random code is modulated onto the carrier frequencies, and an identical code is generated at the receiver on the ground; the time shift is calculated from the comparison, with the fourth satellite used to solve for the receiver's clock offset.
GPS: Navstar Global Positioning System • The accuracy of civilian GPS is degraded to about 300m, but quite a few commercial products significantly enhance this accuracy. • The Differential GPS (DGPS) concept is based on a second GPS receiver at a precisely surveyed location. • We assume that the same corrections apply to both locations. • Position error may be reduced to well under 10m. • Some other up-to-date commercial products claim accuracy of several cm.
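The trilateration idea behind GPS can be illustrated in 2D with three beacons at known positions (the real system works in 3D and also solves for the receiver clock offset using a fourth satellite). A sketch with invented names and values:

```python
import math

def trilaterate_2d(beacons, ranges):
    """Position from ranges to three known beacons (2D analogue of GPS
    trilateration; the receiver clock-bias unknown is omitted here).
    beacons = [(x1,y1), (x2,y2), (x3,y3)], ranges = [r1, r2, r3]."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    # Subtracting the range equations pairwise removes the quadratic terms,
    # leaving two linear equations in (x, y).
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # Cramer's rule for the 2x2 system
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

# Receiver actually at (3, 4); ranges computed from the known geometry
beacons = [(0, 0), (10, 0), (0, 10)]
truth = (3, 4)
ranges = [math.dist(b, truth) for b in beacons]
x, y = trilaterate_2d(beacons, ranges)
assert abs(x - 3) < 1e-9 and abs(y - 4) < 1e-9
```

DGPS corrections would simply be subtracted from the measured ranges before this solve.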
Compact Outdoor Multipurpose POSE (Position and Orientation Estimation) Assessment Sensing System (COMPASS) • COMPASS is a flexible suite of sensors and software integrated for GPS and INS navigation. • COMPASS consists of a high-accuracy, 12-channel, differential Global Positioning System (GPS) with an integrated Inertial Navigation System (INS) and Land Navigation System (LNS). • This GPS/INS/LNS is being integrated with numerous autonomous robotic vehicles by Omnitech for military and commercial applications. • COMPASS allows semiautonomous operation with multiple configurations available.
Triangulation • Active: employing • a laser source illuminating the target object and • a CCD camera. Calibration targets are placed at known distances z1 and z2. Point-source illumination of the image effectively eliminates the correspondence problem.
Triangulation by Stereo Vision • Passive: stereo vision measures angles (α, β) from two points (P1, P2) located at a known relative distance (A); the range follows from the Law of Sines, assuming the measurement is done between three coplanar points. • Limiting factors: • reduced accuracy with increasing range. • angular measurement errors. • may be performed only within the stereo observation window, because of missing parts/shadowing between the scenes.
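The Law-of-Sines range computation can be sketched as follows (function name and angle convention are mine: α and β are the angles between the baseline and each line of sight):

```python
import math

def triangulate(baseline, alpha, beta):
    """Passive triangulation from two observation points a known baseline A
    apart; alpha, beta (radians) are measured between the baseline and the
    lines of sight. Returns the target's perpendicular distance from the
    baseline."""
    # Law of Sines: range from P1 is A*sin(beta)/sin(pi - alpha - beta),
    # and sin(pi - alpha - beta) = sin(alpha + beta).
    r1 = baseline * math.sin(beta) / math.sin(alpha + beta)
    return r1 * math.sin(alpha)

# Symmetric case: target centered 10 m above a 2 m baseline
z = triangulate(2.0, math.atan2(10, 1), math.atan2(10, 1))
assert abs(z - 10.0) < 1e-9
```

Note how the angle sum α + β approaches π as the target recedes, so small angular errors blow up the range estimate, which is the "reduced accuracy with increasing range" limitation above.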
Triangulation by Stereo Vision • The horopter is the plane of zero disparity. • Disparity is the displacement of the image as shifted between the two scenes. • Disparity is inversely proportional to the distance to the object. • Basic steps involved in the stereo ranging process: • a point in the image of one camera must be identified. • the same point must be located in the image of the other camera. • the lateral position of both points must be measured with respect to a common reference. • range Z is then calculated from the disparity in the lateral measurements.
Triangulation by Stereo Vision • Correspondence is the procedure of matching the two images. • Matching is difficult in regions where the intensity or color is uniform. • Shadows may appear in only one image. • Epipolar restriction: reduces the 2D search to a single dimension. • The epipolar surface is a plane defined by the lens center points L and R and the object of interest at P.
MIT Near-IR Ranging • One-dimensional implementation. • Two identical point-source LEDs placed a known distance d apart. • The incident light is focused on the target surface. • The emitters are fired in sequence. • The reflected energy is detected by a phototransistor. • Since the detected intensity falls off with the distance traveled, the ratio of the two readings yields range. Assumptions: the surface diffuses the reflected light perfectly (Lambertian surface) and the target is wider than the field of view.
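A sketch of ratio-based ranging under the stated Lambertian assumption; here an inverse-square falloff of the detected energy is assumed, which may differ from the exact MIT formulation, and the function name and values are mine:

```python
import math

def near_ir_range(e_near, e_far, d):
    """Range from the ratio of detected energies of two emitters a known
    distance d apart along the optical axis. Sketch only: assumes an
    inverse-square falloff and a Lambertian target.

    e_near : energy detected when the closer emitter fires
    e_far  : energy detected when the farther emitter fires"""
    # e_near / e_far = ((x + d) / x)^2  =>  x = d / (sqrt(ratio) - 1)
    return d / (math.sqrt(e_near / e_far) - 1)

# Target 1.0 m from the near emitter, emitters 0.1 m apart
x, d = 1.0, 0.1
e_near, e_far = 1 / x**2, 1 / (x + d)**2
assert abs(near_ir_range(e_near, e_far, d) - 1.0) < 1e-9
```

Because only the ratio enters, the (unknown) target reflectivity cancels, which is the point of firing the two emitters in sequence.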
Basics of Machine Vision
Vision systems are very complex. • Focus on techniques for closing the loop in robotic mechanisms. • How might image processing be used to direct the behavior of robotic systems? • Percept inversion: what must the world model be to produce the observed sensory stimuli? • Static Reconstruction Architecture • Task A: a stereo pair is used to reconstruct the world geometry.
Reconstructing the image is usually not the proper solution for robotic control. • Examples where reconstruction is a proper step: • medical imagery • constructing topological maps • Perception was considered in isolation. • Elitism led vision researchers to consider mostly the interests of their closed community. • Too much energy was invested in building World Models.
Active Perception Paradigm. • Task B: a mobile robot must navigate across outdoor terrain. • Many of the details are likely to be irrelevant in this task. • The responsiveness of the robot depends on how precisely it focuses on just the right visual feature set.
Perception produces motor control outputs, not representations. • Action oriented perception. • Expectation based perception. • Focus on attention. • Active perception: agent can use motor control to enhance perceptional processing.
Cameras as sensors. • Light scattered from objects in the environment is projected through a lens system onto the image plane. • Information about the incoming light (e.g., intensity, color) is detected by photosensitive elements built from silicon circuits in charge-coupled devices (CCDs) placed on the image plane. • In machine vision, the computer must make sense of the information it gets on the image plane. • The lens focuses the incoming light.
Cameras as sensors. • Only objects at a particular range of distances from the lens will be in focus. This range of distances is called the camera's depth of field. • The image plane is subdivided into pixels, typically arranged in a grid (e.g., 512x512). • The projection on the image plane is called the image. • Our goal is to extract information about the world from a 2D projection of the energy stream derived from a complex 3D interaction with the world.
Pinhole camera model. Perspective projection geometry. Mathematically equivalent non-inverting geometry.
Measuring the distance to an object P with two cameras. Object P lies a distance l from the axis of camera 1 and its image appears a distance a from the lens axis; it lies a distance r from the axis of camera 2 and its image appears a distance b from the lens axis. l + r = the distance between the cameras; f = focal length; the quantity a + b is called the disparity. By similar triangles, d/l = (d + f)/(l + a) and d/r = (d + f)/(r + b), which give d = f(l + r)/(a + b). The distance d to object P is inversely proportional to the disparity.
A simple example: a stereo system encodes depth entirely in terms of disparity: z = 2d·f/disparity, where 2d = distance between the cameras, f = focal length, and z = distance to the object.
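Depth from disparity can be sketched in one function, using the standard relation z = f·B/disparity with B the camera separation (names and example numbers are mine):

```python
def stereo_depth(focal_length, baseline, disparity):
    """Depth from disparity: z = f * B / disparity, with B the distance
    between the two camera axes (2d in the slide) and disparity the sum
    of the image offsets a + b measured on the two image planes."""
    return focal_length * baseline / disparity

# f = 8 mm, cameras 120 mm apart, disparity 0.48 mm  ->  z = 2000 mm
assert abs(stereo_depth(8.0, 120.0, 0.48) - 2000.0) < 1e-9
```

Halving the disparity doubles the computed depth, the inverse proportionality stated above.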
Geometrical parameters for binocular imaging. • The information needed to reconstruct the 3D geometry also includes • the kinematic configuration of the cameras and • the offset from the image center (optical distortions).
Exercise a2. Two cameras are aimed at the same object P. • The distance between the cameras is 2d, as shown in the figure. • Compute the distance from the object to each of the two cameras. • Present your solution in as much detail as you can. • Ignore • the kinematics of the camera system (relative vibrations) • optical distortions as a function of the object's distance from the image center.
Edge detection • The brightness of each pixel in the image is proportional to the amount of light directed toward the camera by the surface patch of the object that projects to that pixel. • Image from a black-and-white camera: • a collection of 512x512 pixels with different gray levels (brightness). • To find an object we have to find its edges: do edge detection. • We define edges as curves in the image plane across which there is a significant change in brightness. • Edge detection is performed in two steps: • detection of edge segments/elements, called edgels • aggregation of edgels. • Because of noise (all sorts of spurious peaks), we first have to smooth the image.
Smoothing: how do we deal with noise? • Convolution. • Applies a filter: a mathematical procedure which finds and eliminates isolated peaks. • Convolution is the operation of computing the weighted integral of one function with respect to another function that has • first been reflected about the origin, and • then variably displaced. • In one continuous dimension: h(t) * i(t) = ∫ h(t − τ) i(τ) dτ = ∫ h(τ) i(t − τ) dτ. (Figures: graphical convolution, discrete time and continuous time.)
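The discrete form of the convolution above can be sketched directly (a naive O(n·k) implementation; the function name is mine):

```python
def convolve1d(signal, kernel):
    """Discrete 1D convolution, y[t] = sum_tau h[tau] * i[t - tau],
    with 'same'-length output and zero padding at the borders."""
    n, k = len(signal), len(kernel)
    half = k // 2
    out = []
    for t in range(n):
        acc = 0.0
        for tau in range(k):
            idx = t - tau + half   # kernel reflected and shifted, centered at t
            if 0 <= idx < n:       # zero padding: skip out-of-range samples
                acc += kernel[tau] * signal[idx]
        out.append(acc)
    return out

# A 3-tap averaging filter suppresses an isolated noise spike
noisy = [1.0, 1.0, 10.0, 1.0, 1.0]
smooth = convolve1d(noisy, [1 / 3, 1 / 3, 1 / 3])
assert abs(smooth[2] - 4.0) < 1e-9   # spike reduced from 10 to 4
```

A Gaussian kernel would be used instead of the box kernel in practice, but the sweeping mechanics are identical.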
How is this done? • Integral or differential operators acting on pixels are implemented as a matrix of multipliers (a kernel) applied to each pixel as it is moved across the image. • The kernel is typically moved from left to right, as you would read a book. (Figures: Sobel gradient kernels, Sobel Laplacian.)
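A sketch of this kernel-sweeping idea with the standard 3x3 Sobel gradient kernels (pure Python and unoptimized; the function name and the toy image are mine):

```python
def sobel_magnitude(img):
    """Apply the 3x3 Sobel gradient kernels to a grayscale image
    (list of lists) and return the gradient magnitude at interior
    pixels; border pixels are left at 0."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge produces a strong response along the boundary
img = [[0, 0, 255, 255]] * 4
mag = sobel_magnitude(img)
assert mag[1][1] > 0 and mag[2][2] > 0
```

Thresholding this magnitude image gives the edgels that the aggregation step then links into edge curves.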