Module 5: AI in Robotics
Summary
This document provides an overview of Module 5, AI in Robotics. It details the different types of robots, including manipulators, mobile robots, and hybrid robots. It also explores challenges in real-world robotics and the various types of sensors used in robots. Furthermore, it introduces concepts like localization, path planning, and different techniques such as Monte Carlo localization and Kalman filters.
Full Transcript
MODULE 5: AI IN ROBOTICS (BRI503)

Introduction
Robots are physical agents that perform tasks by manipulating the physical world. They are equipped with:
– Effectors, such as legs, wheels, joints, and grippers, whose purpose is to assert physical forces on the environment.
– Sensors, which allow them to perceive their environment. Present-day robotics employs a diverse set of sensors, such as cameras and ultrasound to measure the environment, and gyroscopes and accelerometers to measure the robot's own motion.
Robots fall into three main categories:
– Manipulators
– Mobile robots
– Hybrid robots

Manipulator Robots
Manipulators, also known as robot arms, are anchored to a workplace (e.g., factory assembly lines, the International Space Station). They feature chains of controllable joints that allow precise placement of their effectors.
Applications:
– Assisting surgeons in hospitals.
– Car manufacturing, where they are critical.
– Generating artwork.

Mobile Robots
Mobile robots move about using wheels, legs, or similar mechanisms.
Applications:
– Delivering food in hospitals.
– Moving containers at loading docks.
– Driverless vehicles (e.g., NAVLAB for highway navigation).
– Unmanned aerial vehicles (UAVs) for surveillance and crop spraying.
– Autonomous underwater vehicles (AUVs) for deep-sea exploration.
– Planetary rovers such as Sojourner.

Hybrid Robots
Hybrids combine mobility with manipulators. They can apply their effectors further afield than anchored manipulators can, but their task is made harder because they lack the rigidity that the anchor provides.
Examples:
– Humanoid robots, resembling the human torso.
– Prosthetics: artificial limbs, eyes, and ears for humans.
– Intelligent environments: sensor-equipped spaces such as smart homes.
– Multibody systems: swarms of small robots working together.
Advantage: wider reach compared to anchored manipulators.
Challenge: they lack the stability of anchored systems.

Challenges in Real-World Robotics
Environment: robot environments are partially observable, stochastic, dynamic, and continuous.
– Partially observable: robots cannot see everything (e.g., around corners).
– Stochastic: motion errors arise from issues such as friction and gear slippage.
Dynamic and continuous nature:
– Real environments operate in real time, unlike simulations.
– Learning through real-world trials is slower and riskier than learning in simulation.
Safety and efficiency:
– Robots must integrate prior knowledge of their tasks, their environments, and their own limitations.
– This allows them to learn efficiently and operate safely without repeating errors.

Types of Sensors
Passive sensors
– Capture signals that are generated by other sources in the environment.
– Example: cameras.
Active sensors
– Send energy into the environment and rely on the fact that this energy is reflected back to the sensor.
– They have increased power consumption and carry a danger of interference when multiple active sensors are used at the same time.
Whether active or passive, sensors can be divided into three types, depending on whether they record distances to objects, entire images of the environment, or properties of the robot itself.

Sensor Classifications
1. Range sensors (distance measurement):
– Sonar sensors: emit directional sound waves and measure distance from the reflected sound. Used underwater (on AUVs) and on land for near-range collision avoidance.
– Radar: used mainly by aircraft.
– Laser range finders: accurate over short and long distances.
– Close-range sensors: tactile sensors such as whiskers, bump panels, and touch-sensitive skin.
– GPS (Global Positioning System): measures distances to satellites via pulsed signals. It provides absolute location outdoors with accuracy of a few meters; differential GPS achieves millimeter accuracy under ideal conditions. GPS is ineffective indoors or underwater.
2. Imaging sensors (environment imaging):
– Cameras provide images of the environment, from which computer vision techniques extract models and features of the environment.
– Stereo vision captures depth information; new active technologies for range imaging are also being developed successfully.
3. Proprioceptive sensors (robot's internal state): inform the robot of its own state.
– To measure the exact configuration of a robotic joint, motors are often equipped with shaft decoders that count the revolutions of motors in small increments.
– Shaft decoders that report wheel revolutions can be used for odometry, the measurement of distance travelled.
– Inertial sensors: gyroscopes for orientation tracking. Positional uncertainty accumulates over time.
– Force and torque sensors: measure forces in translational and rotational directions. They are used when handling fragile objects (allowing the robot to sense how hard it is gripping a light bulb) and when adapting to unknown object shapes and locations.

Effectors
Effectors are the means by which robots move and change the shape of their bodies. Effectors are designed around the concept of degrees of freedom (DOF). Degrees of freedom are the independent movements possible within a robot's joints or body.
– Count one degree of freedom for each independent direction in which a robot, or one of its effectors, can move.
– An AUV has six degrees of freedom: three for its (x, y, z) location in space and three for its angular orientation, known as yaw, roll, and pitch. These define the kinematic state, or pose, of the robot.
For non-rigid bodies there are additional degrees of freedom. The human arm as a whole has more than six degrees of freedom:
– The elbow has one degree of freedom: it can flex in one direction.
– The wrist has three degrees of freedom: it can move up and down, side to side, and can also rotate.
A typical robot arm has exactly six degrees of freedom, created by five revolute joints that generate rotational motion and one prismatic joint that generates sliding motion.

Mobile Robot Degrees of Freedom
– Consider your average car: it can move forward or backward, and it can turn, giving it two DOFs.
– In contrast, a car's kinematic configuration is three-dimensional: on an open flat surface, one can easily maneuver a car to any (x, y) point, in any orientation.
– Thus, the car has 3 effective degrees of freedom but 2 controllable degrees of freedom.
Nonholonomic robot: has more effective DOFs than controllable DOFs. Example: cars.
Holonomic robot: effective DOFs and controllable DOFs are equal. Example: robot arms. Holonomic robots are easier to control (it would be easier to park a car that could move sideways as well as forward and backward), but they are mechanically more complex.

Mobile Robot Locomotion
For mobile robots, there exists a range of mechanisms for locomotion, including wheels, tracks, and legs.
Differential drive robots possess two independently actuated wheels (or tracks), one on each side, as on a military tank.
– If both wheels move at the same velocity, the robot moves in a straight line.
– If they move at different velocities, the robot turns (a minimal kinematics sketch follows this list).
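The turning behavior follows directly from the two wheel speeds. Below is a minimal sketch of standard differential-drive kinematics; the wheel radius and axle length values are hypothetical and purely illustrative, not parameters prescribed by this module.

```python
def body_velocity(w_left, w_right, wheel_radius, axle_length):
    """Differential-drive kinematics: body velocity from wheel angular velocities."""
    v_left = wheel_radius * w_left            # ground speed of the left wheel
    v_right = wheel_radius * w_right          # ground speed of the right wheel
    v = (v_left + v_right) / 2.0              # forward speed of the robot body
    omega = (v_right - v_left) / axle_length  # turn rate of the robot body
    return v, omega

# Equal wheel speeds: omega == 0, so the robot moves in a straight line.
print(body_velocity(2.0, 2.0, wheel_radius=0.05, axle_length=0.3))
# Unequal wheel speeds: omega != 0, so the robot turns.
print(body_velocity(1.0, 2.0, wheel_radius=0.05, axle_length=0.3))
```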
Synchro drive: each wheel can move and turn around its own axis. The constraint is that all wheels always point in the same direction and move at the same speed.
Legged robots: can handle very rough terrain. However, legs are notoriously slow on flat surfaces, and they are mechanically difficult to build. Robotics researchers have tried designs ranging from one leg up to dozens of legs. Legged robots have been made to walk, run, and even hop. A hopping robot is dynamically stable, meaning that it can remain upright while hopping around; a robot that can remain upright without moving its legs is statically stable.
Airborne robots: use propellers or turbines (e.g., drones, blimps).
Underwater robots: use thrusters (e.g., submarines, AUVs).
Power and actuation mechanisms:
– The electric motor is the most popular mechanism for both manipulator actuation and locomotion.
– Pneumatic actuation, which uses compressed gas, and hydraulic actuation, which uses pressurized fluids, also have their application niches.
Additional robot components:
– Communication: wireless networks for control and data transfer.
– Framework: structural body for integrating components.
– Emergency tools: soldering iron and maintenance tools.

Robotic Perception
Perception is the process by which robots map sensor measurements into internal representations of the environment. Perception is difficult because, in general, the sensors are noisy and the environment is partially observable, unpredictable, and often dynamic.
Good internal representations have three properties:
– They contain enough information for the robot to make the right decisions.
– They are structured so that they can be updated efficiently.
– They are natural, in the sense that internal variables correspond to natural state variables in the physical world.
For robotics problems, we usually include the robot's own past actions as observed variables in the model. Robot perception can then be viewed as temporal inference from sequences of actions and measurements.

Dynamic Bayes Network
X_t is the state of the environment (including the robot) at time t, z_t is the observation received at time t, and a_t is the action taken after the observation is received. The task is to compute the new belief state P(X_{t+1} | z_{1:t+1}, a_{1:t}) from the current belief state P(X_t | z_{1:t}, a_{1:t-1}) and the new observation z_{t+1}.
Motion model: describes how a robot's position changes over time based on control inputs (e.g., speed, direction).
Sensor model: describes how robots perceive the environment and themselves using sensors.

Localization
Localization is determining where objects, or the robot itself, are located in the environment. It is one of the most pervasive perception problems in robotics, because knowledge about where things are is necessary for the robot's physical interactions. Robot manipulators need object locations, and navigating robots must determine their own position to reach goals.
Example: a mobile robot navigates an indoor hallway with marked walls.
– The robot's sensor detects a wall corner 5 meters away at a 30° bearing.
– Comparing this data against a stored map lets the robot calculate its exact position in the hallway.
Landmarks are fundamental in various localization systems, especially in structured environments or when GPS is unavailable.

Types of Localization Problems
– Tracking problem: the initial pose of the object is known. It is characterized by bounded uncertainty (a narrow range of possible positions).
– Global localization problem: the initial location of the object is unknown. It involves managing broad uncertainty until the object is localized; once localized, it turns into a tracking problem.
– Kidnapping problem: the object or robot is moved unpredictably (e.g., a simulated "kidnapping"). It tests the robustness of a localization algorithm under extreme conditions.

Motion Model for Robots
Assume the robot moves slowly in a plane and is given an exact map of the environment. The pose of such a mobile robot is defined by its two Cartesian coordinates, with values x and y, and its heading, with value θ. If we arrange those three values in a vector, then any particular state is given by the state vector

X_t = (x_t, y_t, θ_t)

Motion model for localization:
– Inputs: translational velocity v_t and rotational velocity ω_t.
– Deterministic model, for small time intervals Δt:

X̂_{t+Δt} = X_t + (v_t Δt cos θ_t, v_t Δt sin θ_t, ω_t Δt)
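A minimal sketch of this deterministic update in Python; the numbers in the example are hypothetical, and a realistic motion model would also perturb v_t and ω_t with Gaussian noise to capture motion uncertainty.

```python
import math

def predict_pose(x, y, theta, v, omega, dt):
    """Deterministic motion update for a small time step dt:
    X_{t+dt} = X_t + (v*dt*cos(theta), v*dt*sin(theta), omega*dt)."""
    return (x + v * dt * math.cos(theta),  # advance along the current heading
            y + v * dt * math.sin(theta),
            theta + omega * dt)            # rotate by the commanded turn rate

# Hypothetical example: drive at 1 m/s while turning at 0.1 rad/s, for 0.5 s.
pose = (0.0, 0.0, 0.0)
pose = predict_pose(*pose, v=1.0, omega=0.1, dt=0.5)
print(pose)  # (0.5, 0.0, 0.05)
```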
Sensor Model
There are two common types of sensor models. The first assumes that the sensors detect stable, recognizable features of the environment called landmarks: features that a robot can use to determine its position. For each landmark, the range and bearing are reported.
The robot's state is represented as X_t = (x_t, y_t, θ_t), where x_t, y_t are the Cartesian coordinates of the robot and θ_t is its orientation. Using basic geometry, the range r and bearing φ to a landmark at known coordinates (x_i, y_i) can be calculated as

r = sqrt((x_t − x_i)² + (y_t − y_i)²)
φ = atan2(y_i − y_t, x_i − x_t) − θ_t

In real-world scenarios, these measurements are affected by Gaussian noise.
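A minimal sketch of this landmark model, assuming a single landmark at known coordinates and hypothetical noise standard deviations sigma_r and sigma_phi; it predicts the ideal range and bearing from a pose and scores a noisy measurement under independent Gaussian noise.

```python
import math

def predict_range_bearing(pose, landmark):
    """Ideal (r, phi) from a robot pose (x, y, theta) to a known landmark."""
    x, y, theta = pose
    lx, ly = landmark
    r = math.hypot(lx - x, ly - y)            # range to the landmark
    phi = math.atan2(ly - y, lx - x) - theta  # bearing relative to the heading
    return r, phi

def gaussian(error, sigma):
    """Zero-mean univariate Gaussian density."""
    return math.exp(-0.5 * (error / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def landmark_likelihood(pose, landmark, z_r, z_phi, sigma_r=0.1, sigma_phi=0.05):
    """P(z | x) under independent Gaussian noise on range and bearing."""
    r, phi = predict_range_bearing(pose, landmark)
    return gaussian(z_r - r, sigma_r) * gaussian(z_phi - phi, sigma_phi)

# Hypothetical check: a measurement near the prediction scores a high likelihood.
pose, landmark = (0.0, 0.0, 0.0), (3.0, 4.0)
print(predict_range_bearing(pose, landmark))        # (5.0, ~0.927)
print(landmark_likelihood(pose, landmark, 5.05, 0.93))
```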
The second sensor model is for range scanners (e.g., LIDAR), which provide a vector of range values

z_t = (z_1, ..., z_M)

where z_j is the range measurement along the jth beam direction relative to the robot.
Prediction of range values: given the robot's pose x_t, the exact range along the jth beam is computed as

ẑ_j = distance from x_t to the nearest obstacle in the jth beam direction.

We assume that the errors for the different beam directions are independent and identically distributed, so we have

P(z_t | x_t) = ∏_{j=1}^{M} P(z_j | x_t)

A standard illustration shows a four-beam range scan and two possible robot poses, one of which is reasonably likely to have produced the observed scan and one of which is not.

Localization Techniques
Monte Carlo Localization (MCL)
Overview:
– MCL uses particle filtering to estimate a robot's location.
– Particles represent possible robot states and are updated using a motion model and a sensor model.
Steps (a minimal sketch follows this list):
– Initialization: particles are distributed uniformly at first, representing global uncertainty.
– Sensor update: measurements are used to assign weights to particles based on the sensor model.
– Resampling: particles are resampled, keeping those with higher weights.
– Iteration: the process repeats as new sensor data arrives, refining the estimate.
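A minimal MCL sketch under stated assumptions: motion_model has the signature of the earlier predict_pose sketch, sensor_likelihood is any function mapping (pose, measurement) to P(z | x) (e.g., a wrapper around the landmark_likelihood sketch above), and the noise magnitudes and workspace size are hypothetical.

```python
import math
import random

def mcl_step(particles, control, measurement, motion_model, sensor_likelihood):
    """One Monte Carlo Localization iteration over a list of particle poses."""
    v, omega, dt = control
    # 1. Motion update: move each particle, adding noise to the controls.
    moved = [motion_model(*p,
                          v=v + random.gauss(0, 0.05),
                          omega=omega + random.gauss(0, 0.02),
                          dt=dt)
             for p in particles]
    # 2. Sensor update: weight each particle by the measurement likelihood.
    weights = [sensor_likelihood(p, measurement) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3. Resampling: draw particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Initialization: uniform particles over a hypothetical 10 m x 10 m area.
particles = [(random.uniform(0, 10), random.uniform(0, 10),
              random.uniform(-math.pi, math.pi)) for _ in range(1000)]
# One iteration, given a measurement z and a sensor_likelihood wrapper:
# particles = mcl_step(particles, (1.0, 0.1, 0.5), z, predict_pose, sensor_likelihood)
```

Resampling concentrates the particle set on high-likelihood poses, which is what turns broad global uncertainty into a tracking estimate over successive iterations.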
Kalman Filters
Overview:
– Kalman filters assume the robot's belief is a Gaussian distribution with mean μ and covariance Σ.
– They are suitable for systems with linear motion and measurement models.
Process:
– As the robot moves, uncertainty increases (reflected by a larger covariance).
– Sensing known landmarks reduces uncertainty (the error ellipses shrink).
– If the robot loses sight of landmarks, uncertainty increases again.

Simultaneous Localization and Mapping (SLAM)
SLAM is the problem in which a robot must localize itself and simultaneously construct a map of an unknown environment. The environment is assumed to be fixed for simplicity; the problem becomes more complex if the environment changes dynamically.
Extended Kalman Filter (EKF) for SLAM:
– The EKF represents the posterior distribution as a Gaussian.
– The mean vector μ_t contains the robot's pose and the locations of all detected landmarks.
– The covariance matrix Σ_t tracks uncertainties and correlations between the robot's pose and the landmarks.
SLAM builds upon localization techniques, incorporating the mapping of landmarks. EKF-SLAM is widely used but relies on distinguishable landmarks.

Planning to Move
The point-to-point motion problem is to deliver the robot or its end effector to a designated target location.
Compliant motion is motion in which the robot moves while in physical contact with an obstacle; examples are a robot manipulator that screws in a light bulb, or a robot that pushes a box across a table top.
Configuration space, the space of robot states defined by location, orientation, and joint angles, is a better place to work than the original 3D space. The path planning problem is to find a path from one configuration to another in configuration space. In robotics, the primary characteristic of path planning is that it involves continuous spaces. The major families of path planning methods are (a toy sketch of the first follows this list):
– cell decomposition
– skeletonization
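A minimal sketch of the cell decomposition idea, assuming the continuous workspace has already been discretized into a hypothetical boolean occupancy grid; planning then reduces to graph search (here, breadth-first search) over adjacent free cells.

```python
from collections import deque

def grid_path(grid, start, goal):
    """Breadth-first search over free cells (False = free, True = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # parent links for path reconstruction
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:    # walk parent links back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not grid[nr][nc] and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None                        # goal unreachable through free cells

# Hypothetical 4x4 cell grid with a short wall of obstacles.
grid = [[False, False, False, False],
        [True,  True,  False, False],
        [False, False, False, True ],
        [False, True,  False, False]]
print(grid_path(grid, start=(0, 0), goal=(3, 3)))
```

A real cell decomposition must also handle mixed cells that are only partially free, which this toy grid ignores.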