Autonomous Systems Lecture 3 - Simple and Complex Sensors PDF
Document Details
University of Groningen
Paul Vogt
Summary
These are lecture notes from the University of Groningen for Autonomous Systems Lecture 3, covering simple and complex sensors. They explain sensor terminology (active vs. passive, simple vs. complex), describe a range of sensors and how they are used for perception, and introduce the basics of machine vision, with examples of sensor applications in robotics.
Full Transcript
The material in this lecture is based on prior work by dr. Marco Wiering, dr. Matias Valdenegro, Henry Maathuis, Jelle Visser, and Ben Wolf

Autonomous Systems Lecture 3 – Simple and Complex Sensors
Paul Vogt

Previous lecture
After the previous lecture students will be able to understand the basics of:
› What is in a robot? sensors, effectors and actuators, controllers
› Degrees of Freedom (DOF)
› Robot locomotion: legged locomotion, gaits, wheeled locomotion
› Trajectory planning
(Image: UBTech Alpha Mini robot)

Today's lecture
The goal for today is to
› Outline sensors commonly used in robotics
› Understand the difference between simple and complex sensors
› Understand how sensors are used for perception
› Introduce concepts from machine vision
› Show some examples
(Image: UBTech Alpha Mini robot)

Overview – Simple and Complex Sensors
› Sensing and Terminology
› Simple and Complex Sensors
› Machine Vision Basics
› Examples
  › Face recognition
  › Emotion recognition
  › Gesture recognition

Robot Perception
› In the real world, the robot perceives
  ▪ the environment (exteroception): other agents and possible actions
  ▪ itself (proprioception): where am I, what is my state?
› But why?
  ▪ To know the state
  ▪ To know the actions we can do
  ▪ To estimate rewards
  ▪ To gauge priorities
(Diagram: Environment → Sensing → State representation → Controller → Actions)

Sensor terminology
› Active vs Passive
  ▪ Active sensors emit a signal that interacts with the environment, and measure that interaction
  ▪ Passive sensors measure physical properties of the environment, without direct interaction
  ▪ Active sensors emit energy whereas passive sensors do not.
› Simple vs Complex
  ▪ Simple sensors provide (usually 1D) data that does not require further processing or interpretation to be useful to the robot
  ▪ Complex sensors provide (usually multidimensional) data that requires sophisticated processing to be useful to the robot

Switches
› Physical sensor that produces a binary output signal
  ▪ Works by restricting the passage of an electric current
› Used for detecting collisions and motion limits
› Active or Passive? Simple or Complex?

Light-sensitive diode
› Photocell that produces a signal when exposed to light, either through voltage or resistance. This is known as the photoelectric effect.
  ▪ Observes luminance in a defined range; the output is continuous but can be binarized.
› Used for Braitenberg vehicles, daylight sensing, and reading optical media

Sensing in a Braitenberg-like vehicle

Position sensors
› They sense a linear or angular position or velocity through
  ▪ a potentiometer, producing a continuous output
  ▪ observing a printed pattern that encodes rotation or position
› These are used for odometry (wheel position/rotations) and determining joint positions.
› How do you get rotation from position?
  ▪ by adding rings with different codes > bit encoding
  ▪ adding more bits > higher precision
https://en.wikipedia.org/wiki/Rotary_encoder
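The slides mention wheel odometry from encoders only in passing; as a hedged illustration (not part of the lecture material), the Python sketch below turns accumulated encoder tick counts into a dead-reckoning pose update for a differential-drive robot. The encoder resolution, wheel radius, and wheel base are invented example values.

```python
import math

TICKS_PER_REV = 512   # assumed encoder resolution (ticks per wheel revolution)
WHEEL_RADIUS = 0.03   # wheel radius in metres (assumed)
WHEEL_BASE = 0.15     # distance between the wheels in metres (assumed)

def ticks_to_distance(ticks: int) -> float:
    """Convert encoder ticks into the distance travelled by one wheel."""
    revolutions = ticks / TICKS_PER_REV
    return revolutions * 2 * math.pi * WHEEL_RADIUS

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Dead-reckoning pose update for a differential-drive robot."""
    d_left = ticks_to_distance(left_ticks)
    d_right = ticks_to_distance(right_ticks)
    d_center = (d_left + d_right) / 2           # forward motion of the robot centre
    d_theta = (d_right - d_left) / WHEEL_BASE   # change in heading
    x += d_center * math.cos(theta + d_theta / 2)
    y += d_center * math.sin(theta + d_theta / 2)
    return x, y, theta + d_theta

# Example: both wheels turn 256 ticks, so the robot drives straight ahead.
print(update_pose(0.0, 0.0, 0.0, 256, 256))
```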
Servo – sensing and actuating
› Both measures and controls the rotational position
  ▪ The unit has a rotary encoder sensor, a gearbox, and a motor as actuator
› How does it work?
  ▪ The servo receives a control signal, e.g. a voltage which encodes the desired position
  ▪ The rotary encoder is used to measure the current position
  ▪ The difference is used to actuate the motor
› They are used for
  ▪ lowering and raising a pen in a 2D plotter
  ▪ actuating other small components

Accelerometers
› Measures acceleration using Newton's second law of motion, F = ma.
  ▪ Conceptually, a calibrated weight connected to a spring moves inside the device
  ▪ When the device accelerates, the position of the mass lags behind and is tracked
› This device is used for
  ▪ velocity measurements (integration)
  ▪ orientation estimation
  ▪ position estimation (start and end of motion)

Gyroscope
› Measures angular position through conservation of angular momentum
  ▪ A rotor spins inside a gimbal, keeping its position fixed
  ▪ When the device rotates, the position difference with the gimbal encodes the orientation
› Usually combined with an accelerometer for better rotation estimates
https://en.wikipedia.org/wiki/Gyroscope

Ultrasonic and sonar
› Bioinspired sensing capability, borrowed from bats and dolphins
  ▪ Uses sound frequencies beyond human hearing
  ▪ Usually has an emitter and a receiver
  ▪ Signals need to be processed to provide meaningful information
› Used for echolocation of nearby objects
› The principles of echolocation
  ▪ Sonars are active sensors emitting an ultrasonic chirp or ping signal
  ▪ They use the time-of-flight principle on detected signals
  ▪ distance ~ time_delay ⋅ speed of sound / 2
› Specular reflection
  ▪ Signals from the emitter can bounce multiple times before being detected
  ▪ Especially with smooth surfaces and small angles
  ▪ Generates false far-away readings
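A minimal numeric sketch of the time-of-flight rule above (distance ~ time_delay ⋅ speed of sound / 2): the division by 2 accounts for the ping travelling to the object and back. This is an illustration only; the echo delay used in the example is an invented value.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def sonar_distance(time_delay_s: float) -> float:
    """Distance to the reflecting object from a round-trip echo delay."""
    return time_delay_s * SPEED_OF_SOUND / 2

# Example: an echo arriving 5.8 ms after the ping corresponds to roughly 1 m.
print(f"{sonar_distance(0.0058):.2f} m")
```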
Laser – light amplification by stimulated emission of radiation
› Measures the reflection of a signal, but with a light beam
  ▪ The speed of light is so fast that we cannot use time of flight
  ▪ We use phase-shift information instead
› The laser beam is less affected by specular reflection
  ▪ A laser is a unidirectional beam instead of a sound cone
  ▪ With a scanning laser, a point cloud is generated.

Scanning laser vs Lidar
› A (2D) scanning laser is the old name for a LiDAR sensor
  ▪ LiDARs make use of a scanning laser beam
› Example of lidar in practice
  ▪ Lidar for autonomous driving

Vision
› The camera is a biomimetic sensing device mimicking a human eye
  ▪ There are some key differences between them
  ▪ Resolution is not uniform in eyes
  ▪ Luminance (120M) and color (6M) are measured separately in eyes
  ▪ Some contrast processing happens in eyes, but not in CMOS cameras
› The book categorises two levels of visual sensing
  ▪ Early vision – a representation of the image (picture)
  ▪ High-level vision – any further processing
› The data produced by a camera is complex
  ▪ It is multidimensional: RGB channels
  ▪ A single picture/frame contains a lot of pixels

Depth Sensing
› There are several devices that produce RGBD images and videos
  ▪ A common sensing device in robotics is the Kinect (Xbox, Azure, discontinued)
  ▪ Uses an infrared pattern (left) to estimate depth (visualized right)
  ▪ A more recent depth camera is the Intel RealSense
  ▪ Uses stereo vision

Sensor terminology recap
› Active vs Passive
  ▪ Active sensors emit a signal that interacts with the environment, and measure that interaction
  ▪ Passive sensors measure physical properties of the environment, without direct interaction
  ▪ Active sensors emit energy whereas passive sensors do not.
› Simple vs Complex
  ▪ Simple sensors provide (usually 1D) data that does not require further processing or interpretation to be useful to the robot
  ▪ Think: switches, light sensors, laser, encoders, and accelerometers.
  ▪ Complex sensors provide (usually multidimensional) data that requires sophisticated processing to be useful to the robot
  ▪ Think: ultrasound, laser, camera, radar, and GPS.

Machine vision use cases
› Detect and classify objects
  ▪ Model-based vision
  ▪ Manipulation tasks
  ▪ Localisation and navigation tasks
› Face recognition
  ▪ Recognise humans and emotions
  ▪ Human–robot interaction
› Motion vision
  ▪ Amplify differences between frames to direct focus
› Stereo vision
  ▪ Secondary depth perception
  ▪ Expanding field of view

Simplifying vision
› You don't always need all information in a single picture/frame
  ▪ How do you track a soccer ball on a football pitch?
  ▪ Filter for the color
  ▪ Filter for motion
  ▪ And combine it into a color blob detector!
https://medium.com/neurosapiens/segmentation-and-classification-with-hsv-8f2406c62b39
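As a hedged sketch of the color-filtering step just described (not code from the lecture), the snippet below uses OpenCV and NumPy to keep only pixels in an assumed HSV range for an orange ball and returns the centre of the largest remaining blob; the threshold values are made up and would need tuning for a real pitch.

```python
import cv2
import numpy as np

def find_ball(frame_bgr):
    """Return the (x, y) centre of the largest orange blob in a BGR frame, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Assumed HSV range for an orange ball; tune for the actual lighting conditions.
    lower = np.array([5, 120, 120])
    upper = np.array([20, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)
    # Remove small speckles so only sizeable blobs remain.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```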
Complementing vision
› You can combine vision with other elements
  ▪ Other sensors – sensor fusion
    ▪ Generally more complex, but also more discrimination power
  ▪ Effectors and/or manipulators – active sensing
    ▪ Reduce uncertainty by having multiple perspectives
  ▪ Knowledge about the world – filtered sensing
    ▪ Allows for simpler processing

Interpreting vision
› How do you get from a camera (supported by other sensors) to knowledge?
  ▪ Use the signals directly – low-level features, raw data
  ▪ Use feature extraction – high-level features, processed data

What kind of features are useful?
› There are five major criteria for selecting the right type of feature
  ▪ What is the task at hand?
    ▪ Imagine a robot navigating an office environment from point A to B
  ▪ What is distinctive in the target environment?
    ▪ In an office, walls, doors, and floor lines could be helpful
  ▪ Which sensors are available?
    ▪ Camera, laser or sonar?
  ▪ How computationally intensive is the feature?
    ▪ A deep neural network is slower than a color blob tracker
  ▪ Which representation matches the target environment?
    ▪ Raw pixels are less useful than lines in an image

Example features – range data (sonar, laser)
› Line extraction
› Corners
› Gaps (doorway?)
› Cylinders (human leg?)

Example features – camera data
› First type: spatially localised in the image
  ▪ Edges / outlines of objects
  ▪ Points of interest
  ▪ Lines / planes
› Second type: whole-image features
  ▪ Brightness
  ▪ Variance
  ▪ Hues

Spatially localised features
› Sobel edge detection
  ▪ A convolution operator is applied to the original image A: the horizontal and vertical gradients Gx and Gy are obtained by convolving A with two 3×3 kernels
  ▪ We can combine Gx and Gy via G = sqrt(Gx^2 + Gy^2)
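A small sketch of the Sobel operator just described, assuming NumPy and SciPy are available (neither is prescribed by the lecture): the image A is convolved with the two standard 3×3 kernels to obtain Gx and Gy, which are combined into the gradient magnitude G = sqrt(Gx^2 + Gy^2).

```python
import numpy as np
from scipy.signal import convolve2d

# Standard 3x3 Sobel kernels for horizontal (Gx) and vertical (Gy) gradients.
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_edges(image: np.ndarray) -> np.ndarray:
    """Return the gradient magnitude G of a 2D greyscale image A."""
    gx = convolve2d(image, KX, mode="same", boundary="symm")
    gy = convolve2d(image, KY, mode="same", boundary="symm")
    return np.sqrt(gx ** 2 + gy ** 2)

# Example: a vertical step edge produces a strong response along the boundary.
a = np.zeros((5, 8))
a[:, 4:] = 1.0
print(sobel_edges(a).round(1))
```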
Neural Network-based vision
Nowadays done using deep neural networks
› Classification – what is it?
› Object detection – is there something?
› Localisation – where is something?
› Segmentation – what belongs to something?
Source: Wikipedia – credit Aphex341

Face recognition
› Detect face
› Localise facial features
› Apply convolutional NN
› Compare with database

Facial emotion recognition

Charade games
› Inspired by language games (Steels)
› de Wit, J., Krahmer, E. & Vogt, P. (2021). Introducing the NEMO-Lowlands iconic gesture dataset, collected through a gameful human–robot interaction. Behavior Research Methods 53, 1353–1370

Charades game – Guessing
(Diagram: Sensing → Feature extraction → Classification → "I think it is a guitar" → Adaptation)

Charades game – Producing
› Lookup (winner-takes-all), execution, and adaptation, e.g. for the word "horse":

           m1   m2   m3
  horse     2    3    5
  guitar    1    1    6
  monkey    6    2    4
  pig       0    4    1

Summarizing
› Sensors can be simple or complex and active or passive
  ▪ Determining the type of sensor depends on the application and available processing power
› Vision and sensor fusion require complex processing
  ▪ High-dimensional data is reduced to meaningful features (five criteria)
› Questions?

Next lecture
Next week, Matthia Sabatelli will lecture about Genetic Programming.
The week after, Matthew Cook will lecture about Cellular Automata.
I will be back in three weeks, talking about Control Architectures.
(Image: UBTech Alpha Mini robot)