Industrial and Household Robotics PDF

Document Details


Uploaded by ImpressedAzalea

Tags

robotics industrial robots household robots engineering

Summary

This document provides a comprehensive overview of industrial and household robots. It describes the basic workings and different types of robots, explaining the concepts of robotics and highlighting the common features between robots and humans/animals. It also discusses robots in the context of the home, specifically mentioning washing machines as examples of robots.

Full Transcript


Understanding Robotics

You will see how the simple build techniques, when combined with a little bit of code, will result in a machine that feels like some kind of pet. You will also see how to debug it when things go wrong, which they will, and how to give the robot ways to indicate problems back to you, along with selecting the behavior you would like to demonstrate. We will connect a joypad to it, give it voice control, and finally show you how to plan a further robot build.

Before we start building a robot, it's worth spending a little time on an introduction to what robotics really is, or what a robot is. We can explore some of the types of robots, along with the basic principles that distinguish a robot from another type of machine. We will think a little about where the line between robot and non-robot machines is, then perhaps muddy that line a little with the somewhat fuzzy truth. We will then look at the types of robots that people start building in the hobbyist and amateur robotics scene.

Chapter 3  Industrial and Household Robotics

What does robot mean?

A robot is a machine that is able to make autonomous decisions based on input from sensors. A software agent is a program that is designed to automatically process input and produce output. Perhaps a robot can be best described as an autonomous software agent with sensors and moving outputs. Or, it could be described as an electromechanical platform with software running on it. Either way, a robot requires electronics, mechanical parts, and code.

The word robot conjures up images of fantastic sci-fi creations, devices with legendary strength and intelligence. These often follow the human body plan, making them an android, the term for a human-like robot. They are often given a personality and behave like a person who is in some simple way naive. Refer to the following diagram:

Science fiction and real-world robots.
Images used are from the public domain OpenClipArt library

The word robot comes from sci-fi. The word is derived from the Czech for slave, and was first used in the 1921 Karel Capek play, Rossum's Universal Robots. The science fiction author Isaac Asimov coined the word robotics as he explored intelligent robot behavior.

Most real robots in our homes and industries are unremarkable, with only a few cutting-edge and eye-catching examples standing out. Most do not stand on two legs, or indeed any legs at all. Some are on wheels, and some are not mobile but still have many moving parts and sensors. Robots like washing machines, autonomous vacuum cleaners, fully self-regulating boilers, and air sampling fans have infiltrated our homes and are part of everyday life. They aren't threatening, and have become just another machine around us. The 3D printer, robot arm, and learning toys are a bit more exciting though. Take a look at the following diagram:

The robot, reduced

At their core, robots can all be simplified down to what is represented in the preceding diagram: outputs such as motors, inputs, and a controller for processing or running code. So, the basis of a robot, represented as a list, would look something like this:

A robot has inputs and sensors to measure and sample a property of its environment.
A robot has outputs: motors, lights, sounds, valves, heaters, or other types of output to alter its environment.
A robot will use the data from its inputs to make autonomous decisions about how it controls its outputs.

Advanced and impressive robots

Now that you have an overview of robots in general, I'll introduce some specific examples that represent the most impressive robots around, and what they are capable of.
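Before looking at these, the three-point reduction given earlier (inputs, outputs, and a controller making autonomous decisions) can be sketched as a minimal control loop. The sensor and motor functions here are hypothetical stand-ins, not code from any real robot:

```python
def control_step(read_distance_cm, set_motor_speed):
    """One pass of a minimal sense-decide-act loop:
    read a sensor, make a decision, then drive an output."""
    distance = read_distance_cm()    # input: sample the environment
    if distance < 20:                # decide: an obstacle is too close
        set_motor_speed(0)           # output: stop
    else:
        set_motor_speed(100)         # output: drive forward
```

Everything in the chapter is, at some level, an elaboration of this loop: richer inputs, richer outputs, and smarter decisions in between.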
These robots are technical demonstrations and, with the exception of the Mars robots, have favored closeness to human or animal adaptability and form over practical, repeated use.

Robots that look like humans and animals

Take a look at the following picture and note the similarities between robots and humans/animals:

A selection of human and animal-like robots. Cog: an MIT project; Honda ASIMO by Morio; Nao from Softbank Robotics; Boston Dynamics Atlas; Boston Dynamics BigDog (https://commons.wikimedia.org/)

What these robots have in common is that they try to emulate humans and animals in the following ways:

The first robot on the left is Cog, from the Massachusetts Institute of Technology. Cog attempted to be human-like in its movements and sensors.

The second robot is the Honda ASIMO, which walks and talks a little like a human. ASIMO's two cameras perform object avoidance and gesture and face recognition, and it has a laser distance sensor to sense the floor. It can follow marks on the floor with infrared sensors, and is able to accept voice commands in English and Japanese.

The third robot in this selection is the Nao robot from Softbank Robotics. This rather cute, 58 cm tall robot was designed as a learning and play robot for users to program. It has sensors to detect its motion, including whether it is falling, and ultrasonic distance sensors to avoid bumps. Nao uses speakers and a microphone for voice processing, and includes multiple cameras to perform similar feats to ASIMO.

The fourth robot is Atlas from Boston Dynamics. This robot is speedy on two legs and is capable of natural-looking movement. It has a laser radar (LIDAR) array, which it uses to sense what is around it in order to plan movement and avoid collisions.
The right-most robot is the Boston Dynamics BigDog, a four-legged robot, or quadruped, which is able to run and is one of the most stable four-legged robots, capable of being pushed, shoved, and walking in icy conditions while remaining stable.

We will incorporate some features like these in the robot we will build: distance sensors to avoid obstacles, a camera for visual processing, line sensors to follow marks on the floor, and voice processing to follow and respond to spoken commands. We will use ultrasonic distance sensors like Nao, and experiment with distance sensors a little like ASIMO. We will also look at pan and tilt mechanisms for the camera, a little like the head used in Cog.

The Mars rovers

The Mars rover robots are designed to function on a different planet, where there is no chance of human intervention if something goes wrong. They are robust by design. New code can only be sent to a Mars rover via a remote connection, as it is not practical to send up a person with a screen and keyboard. The Mars rover is headless by design. Refer to the following photo:

The Curiosity Mars rover by NASA

Mars rovers depend on wheels instead of legs, since wheels make it far simpler to keep a robot stable, and there is far less that can go wrong. Each wheel on the Mars rovers has its own motor. The wheels are arranged to provide maximum grip and stability to tackle the rocky terrain and reduced gravity on Mars.

The Curiosity rover was deposited on Mars with its sensitive camera folded up. After landing, the camera was unfolded and positioned with servo motors. The camera package can be positioned using a pan and tilt mechanism so it can take in as much of the Mars landscape as it can, sending footage and pictures back to NASA for analysis. Like the Mars robots, the robot we will build will use motor-driven wheels.
Our robot will also be designed to run without a keyboard and mouse, being headless by design. As we expand the capabilities of our robot, we will also use servo motors to drive a pan and tilt mechanism.

Robots in the home

Many robots have already infiltrated our homes. They are overlooked as robots because, at first glance, they appear commonplace and mundane. However, they are more sophisticated than they seem.

The washing machine

Let's start with the washing machine. It is used every day in some homes, with a constant stream of clothes to wash, spin, and dry. But how is this a robot? Let us understand this by referring to the following diagram:

The humble washing machine as a robot

The preceding diagram represents a washing machine as a block diagram. There is a central controller connected to the display, with controls to select a program. The lines going out of the controller are outputs, and the lines going into the controller are data coming in from sensors. The dashed lines from outputs to sensors show a closed loop of output actions in the real world causing sensor changes; this is feedback, an essential concept in robotics.

The washing machine uses the display and buttons to let the user choose the settings and see the status. After the start button is pressed, the machine will check the door sensor and sensibly refuse to start if the door is open. Once the door is closed and the start button is pressed, it will output to lock the door. After this, it uses heaters, valves, and pumps to fill the drum with heated water, using sensor feedback to regulate the water level and temperature.

This washing machine is in every respect a robot. A washing machine has sensors and outputs to affect its environment. Processing allows it to follow a program and use sensors with feedback to reach and maintain conditions.
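That fill-and-heat feedback idea can be sketched in a few lines; the sensor and actuator functions, target level, and target temperature here are invented for illustration, not taken from a real machine:

```python
def regulate_fill(read_level, read_temp, set_valve, set_heater,
                  target_level=50, target_temp=40):
    """One feedback pass of a washing-machine-style fill cycle:
    current sensor readings decide whether each output stays on."""
    set_valve(read_level() < target_level)   # keep filling until at level
    set_heater(read_temp() < target_temp)    # keep heating until at temperature
```

Called repeatedly, this closes the loop the block diagram describes: the valve and heater change the water, the water changes the sensor readings, and the readings switch the outputs off once the targets are reached.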
A washing machine repair person may be more of a roboticist than I.

Other household robots

A gas central heating boiler has sensors, pumps, and valves, and uses feedback mechanisms to maintain the temperature of the house, regulate water flow through the heating, regulate gas flow, and ensure that the pilot light stays lit. Smart fans use sensors to detect room temperature, humidity, and air quality, then output through the fan speed and heating elements. A computer printer is also a robot, with moving-part outputs and sensors to detect all those pesky paper jams.

Perhaps the most obvious home robot is the robot vacuum cleaner. Refer to the following diagram:

A robotic vacuum cleaner (PicaBot by Handitec)

This wheeled mobile robot is like the one we will build here, but prettier. These machines are packed with sensors to detect walls, bag levels, and barrier zones, and to avoid collisions. They most closely represent the type of robot we are looking at. As we build our robot, we will explore how to use its sensors to detect things and react to them, forming the same feedback loops we saw in the washing machine.

Robots in industry

Another place robots are commonly seen is in industry. The first useful robots were used in factories, and have been there for a long time.

Robot arms

Robot arms range from very tiny and delicate robots for turning eggs to colossal monsters moving shipping containers. Robot arms tend to use stepper and servo motors. An impressive current industrial arm robot is Baxter from Rethink Robotics:

The Rethink Robotics Baxter robot

Many robot arms are unsafe to work next to and could cause accidents. Not so with Baxter; it can sense a human and work around them or pause for safety. In the preceding image, these sensors can be seen around the "head." The arm sensors and soft joints also allow Baxter to sense and react to collisions.
Baxter also has a training and repeat mechanism that lets workers adapt it to new work, using sensors in the joints to detect their position when being trained or when playing back motions. Our robot will use encoder sensors so we can precisely program wheel movements.

Warehouse robots

Another common type of robot used in industry is those that move items around a factory floor or warehouse. There are giant robotic crane systems capable of shifting pallets in storage complexes. They receive instructions on where goods need to be moved from and to within shelving systems:

Intellicart line-following robot

Smaller item-moving robot vehicles often employ line-sensing technology: following lines on the floor, following a wire underneath the floor via magnetic sensing, or following marker beacons as ASIMO does. Our robot will follow lines like these. These line-following carts frequently use wheeled arrangements because they are simple to maintain and can form stable platforms.

Competitive, educational, and hobby robots

The most fun robots can be those built by amateur robot builders. This is an extremely innovative space. Robotics has always had a home in education, with academic builders using robots as learning and experimentation platforms. Many commercial ventures have started in this setting. University robots tend to be group efforts, with access to increasingly high-tech academic equipment, as shown in the following picture:

Kismet and OhBot

Kismet was created at MIT in the late 90s, and a number of hobbyist robots are derived from it. It was groundbreaking at the time, using servo motors to drive face movements intended to mimic human expressions. It has been followed in the community by OhBot, an inexpensive hobbyist kit using servo motors, which can be linked with a Raspberry Pi, using voice recognition and facial camera processing to make a convincing display.
Hobby robotics is strongly linked with open source and blogging, sharing designs and code, leading to further ideas. Hobbyist robots can be created from kits available on the internet, with modifications and additions. The kits cover a wide range of complexity, from simple three-wheeled bases to drone kits and hexapods, and come with or without the electronics included.

Spiderbot - built by me, based on a kit. The controller is an ESP8266 + Adafruit 16-channel servo controller

Skittlebot was my Pi Wars 2018 entry, built using toy hacking: repurposing a remote control excavator toy into a robot platform. Pi Wars is a robotics competition for Raspberry Pi-based robots, with both manual and autonomous challenges. There were entries with decorative cases and interesting engineering principles. Skittlebot uses three distance sensors to avoid walls. Here is a photo of Skittlebot:

Skittlebot - my Pi Wars 2018 robot, based on a toy

Some hobbyist robots are built from scratch, using 3D printing, laser cutting, vacuum forming, woodwork, CNC, and other techniques to construct the chassis and parts. Refer to the following set of photos:

Building Armbot

I built this robot from scratch for the London robotics group, the Aurorans, in 2009. The robot was known as eeeBot in 2009, since it was intended to be driven by an Eee PC laptop. The Aurorans were a community who met to discuss robotics. The robot was later given a Raspberry Pi, and a robot arm kit seemed to fit it, earning it the name Armbot. In the current market there are many chassis kits, and a beginner will not need to measure and cut materials in this way to make a functioning robot. This robot was not built to compete, but to inspire other robot builders and kids to code.
The television series Robot Wars is a well-known competitive robot event, with impressive construction and engineering skills on display. There is no autonomous behavior in Robot Wars, though; the machines are all manually driven, like remote control cars. Washing machines, although less exciting, are smarter, so they could more strictly be considered robots.

Advanced Robotics and AI

The basic difference between what we will call an AI robot and a more normal robot is the ability of the robot and its software to make decisions, and to learn and adapt to its environment based on data from its sensors. To be a bit more specific, we are leaving the world of deterministic behaviors behind. When we call a system deterministic, we mean that for a given set of inputs, the robot will always produce the same output. If faced with the same situation, such as encountering an obstacle, the robot will always do the same thing, such as go around the obstacle to the left. An AI robot, however, can do two things the standard robot cannot: make decisions and learn from experience. The AI robot will change and adapt to circumstances, and may do something different each time a situation is encountered. It may try to push the obstacle out of the way, or make up a new route, or change goals.

The basic principle of robotics and AI

Artificial intelligence applied to robotics development requires a different set of skills from you, the robot designer or developer. You may have made robots before. You probably have a quadcopter or a 3D printer (which is, in fact, a robot). The familiar world of Proportional Integral Derivative (PID) controllers, sensor loops, and state machines must give way to artificial neural networks, expert systems, genetic algorithms, and searching path planners. We want a robot that does not just react to its environment as a reflex action, but one that has goals and intent, and that can learn and adapt to the environment.
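The deterministic-versus-adaptive distinction above can be made concrete with a small sketch. The obstacle-avoidance rules, action names, and reinforcement factors here are all invented for illustration:

```python
import random

def deterministic_avoid(obstacle_side):
    """Deterministic: the same input always yields the same action."""
    if obstacle_side == "right":
        return "turn_left"
    return "turn_right"

class AdaptiveAvoider:
    """Adaptive: the robot's action preferences shift with experience,
    so the same situation may produce different actions over time."""
    def __init__(self):
        self.weights = {"turn_left": 1.0, "turn_right": 1.0, "push": 1.0}

    def choose(self):
        actions = list(self.weights)
        return random.choices(actions, [self.weights[a] for a in actions])[0]

    def reward(self, action, success):
        # Reinforce actions that worked; weaken ones that did not.
        self.weights[action] *= 1.5 if success else 0.5
```

The deterministic function will go around the obstacle to the left every single time; the adaptive version keeps a weighted menu of behaviors and reshapes it as outcomes come in, which is the simplest flavor of learning from experience.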
We want to solve problems that would be intractable or impossible otherwise. What we are going to do first is provide some tools and background to match the infrastructure that was used to develop the examples. This is both to provide an even playing field and to avoid assuming any knowledge on the reader's part. We will use the Python programming language, the Robot Operating System (ROS) for our data infrastructure, and the Linux operating system. I developed the examples with Oracle's VirtualBox software running Ubuntu Linux in a virtual machine on a Windows computer. Our robot hardware will be a Raspberry Pi 3 as the robot's on-board brain, and an Arduino Mega 2560 as the hardware interface microcontroller.

What is AI (and what is it not)?

What would be a definition of AI? In general, it means a machine that exhibits some characteristics of intelligence: thinking, reasoning, planning, learning, and adapting. It can also mean a software program that can simulate thinking or reasoning. Let's try some examples. A robot that avoids obstacles by simple rules (if the obstacle is to the right, go left) is not an AI. A program that learns by example to recognize a cat in a video is an AI. A mechanical arm that is operated by a joystick is not AI, but a robot arm that adapts to different objects in order to pick them up is AI.

There are two defining characteristics of artificial intelligence robots that you must be aware of. First of all, AI robots learn and adapt to their environment, which means that they change their behavior over time. The second characteristic is emergent behavior, where the robot exhibits actions that we did not explicitly program into it. We are giving the robot controlling software that is inherently non-linear and self-organizing. The robot may suddenly exhibit some bizarre or unusual reaction to an event or situation that seems odd, quirky, or even emotional.
I worked with a self-driving car that we swore had delicate sensibilities and moved very daintily, earning it the nickname Ferdinand, after the sensitive, flower-loving bull from the cartoon, which was appropriate for a nine-ton truck that appeared to like plants. These behaviors are just caused by interactions of the various software components and control algorithms, and do not represent anything more than that.

One concept you will hear around AI circles is the Turing test. The Turing test was proposed by Alan Turing in 1950, in a paper entitled Computing Machinery and Intelligence. He postulated that a human interrogator would question a hidden, unseen AI system, along with another human. If the person posing the questions was unable to tell which participant was the computer and which was the human, then that AI computer would pass the test. This test supposes that the AI would be fully capable of listening to a conversation, understanding the content, and giving the same sort of answers a person would. I don't believe that AI has progressed to this point yet, but chat bots and automated answering services have done a good job of making you believe that you are talking to a human and not a robot.

Our objective is not to pass the Turing test, but rather to take some novel approaches to solving problems using techniques in machine learning, planning, goal seeking, pattern recognition, grouping, and clustering. Many of these problems would be very difficult to solve any other way. A software AI that could pass the Turing test would be an example of a general artificial intelligence: a full, working intelligent artificial brain that, just like you, does not need to be specifically trained to solve any particular problem.
To date, a general AI has not been created. What we do have is narrow AI: software that simulates thinking in a very narrow application, such as recognizing objects or picking good stocks to buy. We are not building a general AI, and we are not going to worry about our creations developing a mind of their own or getting out of control. That comes from the realm of science fiction and bad movies, rather than the reality of computers today. I am firmly of the mind that anyone preaching about the evils of AI, or predicting that robots will take over the world, has not worked or practiced in this area, and has not seen the dismal state of AI research in respect of solving general problems or creating anything resembling an actual intelligence.

There is nothing new under the sun

Most of AI as practiced today is not new. Most of these techniques were developed in the 1960s and 1970s and fell out of favor because the computing machinery of the day was insufficient for the complexity of the software or the number of calculations required. They waited for computers to get bigger, and for another very significant event: the invention of the internet. In previous decades, if you needed 10,000 digitized pictures of cats to compile a database to train a neural network, the task would have been almost impossible. Today, a Google search for cat pictures returns 126,000,000 results in 0.44 seconds. Finding cat pictures, or anything else, is just a search away, and you have your training set for your neural network, unless you need to train on a very specific set of objects that don't happen to be on the internet, in which case you will once again be taking a lot of pictures, with another modern aid not found in the 1960s: a digital camera. The happy combination of very fast computers; cheap, plentiful storage; and access to almost unlimited data of every sort has produced a renaissance in AI.
Another modern development has occurred at the other end of the computing spectrum. While anyone can now have a supercomputer on their desk at home, the development of the smartphone has driven a whole series of innovations that are just being felt in technology. Your wonder of a smartphone has accelerometers and gyroscopes made of tiny silicon chips called microelectromechanical systems (MEMS). It also has a high-resolution but very small digital camera, and a multi-core computer processor that takes very little power to run. It also contains (probably) three radios: a WiFi wireless network adapter, a cellular phone, and a Bluetooth transceiver. As good as these parts are at making your iPhone™ fun to use, they have also found their way into parts available for robots. That is fun for us, because what used to be available only to research labs and universities is now for sale to individual users. If you happen to have a university or research lab, or work for a technology company with a multi-million dollar development budget, you will also learn something, and find tools and ideas that hopefully will inspire your robotics creations or power new products with exciting capabilities.

What is a robot?

A robot is a machine that is capable of sensing and reacting to its environment, and that has some human or animal-like function. We generally think of a robot as some sort of automated, self-directing mobile machine that can interact with its environment.

The example problem – clean up this room!

We will be using AI and robotics techniques to pick up toys in my upstairs game room after my grandchildren have visited.
That sound you just heard was the gasp from the professional robotics engineers and researchers in the audience. This problem is a close analog to the problem Amazon has in picking items off shelves and putting them in a box to send to you. For the last several years, Amazon has sponsored the Amazon Robotics Challenge, where it invited teams to try to pick items off shelves and put them into a box for cash prizes. They thought the problem difficult enough to invite teams from around the world. The contest was won in 2017 by a team from Australia.

Robotics designers start with the environment: where does the robot work? We divide environments into two categories: structured and unstructured. A structured environment, such as the playing field for a FIRST Robotics competition, an assembly line, or a lab bench, has everything in an organized space. You have heard the saying "A place for everything and everything in its place"; that is a structured environment. Another way to think about it is that we know in advance where everything is or is going to be. We know what color things are, where they are placed in space, and what shape they are. A name for this type of information is a priori knowledge: things we know in advance. Having advance knowledge of the environment is sometimes absolutely essential in robotics. Assembly line robots expect parts to arrive in exactly the right position and orientation to be grasped and placed. In other words, we have arranged the world to suit the robot.

In the world of our game room, this is simply not an option. If I could get my grandchildren to put their toys in exactly the same spot each time, then we would not need a robot for this task. We have a set of objects that is fairly fixed: we only have so many toys for them to play with. We occasionally add things or lose toys, or something falls down the stairs, but the toys are elements of a fixed set of objects.
What the toys are not is positioned or oriented in any particular manner; they are just where they were left when the kids finished playing with them and went home. We also have a fixed set of furniture, but some parts move: the footstool or chairs can be moved around. This is an unstructured environment, where the robot and the software have to adapt, not the toys or furniture.

The problem is to have the robot drive around the room and pick up toys. Let's break this task down into a series of steps:

1. We want the user to interact with the robot by talking to it. We want the robot to understand what we want it to do, which is to say, what our intent is for the commands we are giving it.

2. Once commanded to start, the robot will have to identify an object as being a toy, and not a wall, a piece of furniture, or a door.

3. The robot must avoid hazards, the most important being the stairs going down to the first floor. Robots have a particular problem with negative obstacles (dropoffs, curbs, cliffs, stairs, and so on), and that is exactly what we have here.

4. Once the robot finds a toy, it has to determine how to pick the toy up with its robot arm. Can it grasp the object directly, or must it scoop the item up, or push it along? We expect that the robot will try different ways to pick up toys and may need several trial-and-error attempts.

5. Once the toy is acquired by the robot arm, the robot needs to carry the toy to a toy box. The robot must recognize the toy box in the room, remember where it is for repeat trips, and then position itself to place the toy in the box. Again, more than one attempt may be required.

6. After the toy is dropped off, the robot returns to patrolling the room, looking for more toys. At some point, hopefully, all of the toys are retrieved. It may have to ask us, the humans, if the room is acceptable, or if it needs to continue cleaning.
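The steps above can be sketched as a simple state machine; the state names and transition events here are invented for illustration and are not the book's actual code:

```python
# State transitions for the toy-pickup cycle: (current state, event) -> next state.
TRANSITIONS = {
    ("waiting", "start_command"): "searching",      # step 1: voice command
    ("searching", "toy_identified"): "picking_up",  # step 2: object recognition
    ("searching", "hazard_detected"): "avoiding",   # step 3: stairs and dropoffs
    ("avoiding", "hazard_cleared"): "searching",
    ("picking_up", "toy_grasped"): "carrying",      # step 4: arm acquires the toy
    ("picking_up", "grasp_failed"): "picking_up",   # retry with another strategy
    ("carrying", "toy_in_box"): "searching",        # steps 5-6: drop off, patrol again
    ("searching", "room_clean"): "waiting",
}

def next_state(state, event):
    """Return the next state, or stay in place on an unrecognized event."""
    return TRANSITIONS.get((state, event), state)
```

Later chapters replace pieces of this table with learned behavior, but the overall cycle (search, pick up, carry, repeat) stays the same.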
What will we be learning from this problem? We will be using this backdrop to examine a variety of AI techniques and tools. It is the process and the approach that is the critical information here, not the problem and not the robot I developed so that we have something to take pictures of. We will be demonstrating techniques for making a moving machine that can learn and adapt to its environment.

What you will learn

We will build a firm foundation for robot control by understanding control theory and timing. We will be using a soft real-time control scheme with what I call a frame-based control loop. This technique has a fancy technical name, rate monotonic scheduling, but I think you will find the concept fairly intuitive and easy to understand.

At the most basic level, AI is a way for the robot to make decisions about its actions. We will introduce a model for decision making that comes from the US Air Force, called the OODA (Observe-Orient-Decide-Act) loop. Our robot will have two of these loops: an inner, introspective loop, and an outward-looking environment sensor loop. The inner loop takes priority over the slower, outer loop, just as the autonomic parts of your body (heartbeat, breathing, eating) take precedence over your task functions (going to work, paying bills, mowing the lawn).

The OODA Loop

The OODA loop was invented by Col. John Boyd, a man also called The Father of the F-16. Col. Boyd's ideas are still widely quoted today, and his OODA loop is used to describe robot artificial intelligence, military planning, and marketing strategies with equal utility. The OODA loop provides a model for how a thinking machine that interacts with its environment might work. Our robot works not by simply executing commands or following instructions step by step, but by setting goals and then working to achieve those goals. The robot is free to set its own path or determine how to get to its goal.
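A frame-based soft real-time loop of the kind described above, with a fast inner loop and a slower outer loop, might be sketched like this. The frame rates, the every-Nth-frame scheduling, and the task functions are my own illustrative choices, not the book's implementation:

```python
import time

def frame_loop(inner_task, outer_task, frame_hz=20, outer_every=5, frames=20):
    """Run inner_task every frame and outer_task every Nth frame,
    sleeping out the remainder of each fixed-length frame so the
    frame rate stays steady (a soft real-time scheme)."""
    frame_time = 1.0 / frame_hz
    for frame in range(frames):
        start = time.monotonic()
        inner_task()                     # fast, high-priority loop (motors, balance)
        if frame % outer_every == 0:
            outer_task()                 # slower loop (vision, planning)
        elapsed = time.monotonic() - start
        if elapsed < frame_time:
            time.sleep(frame_time - elapsed)
```

Giving the inner task every frame and the outer task only every Nth frame is what enforces the priority ordering: the introspective loop always runs on time, and the environment loop fits into what is left.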
We will tell the robot to pick up that toy, and the robot will decide which toy, how to get in range, and how to pick the toy up. If we, the human robot owner, instead tried to treat the robot as a teleoperated hand, we would have to give the robot many individual instructions, such as move forward, move right, extend arm, and open hand, each given individually and without giving the robot any idea of why we were making those motions.

Before designing the specifics of our robot and its software, we have to match its capabilities to the environment and the problem it must solve. We will use two tools from the discipline of systems engineering to accomplish this: use cases and storyboards. I will make this process as streamlined as possible. More advanced types of systems engineering are used by NASA and aerospace companies to design rockets and aircraft; this gives you a taste of those types of structured processes.

Artificial intelligence and advanced robotics techniques

We start with object recognition. We need our robot to recognize objects, and then classify them as either toys to be picked up or not toys, to be left alone. We will use a trained artificial neural network (ANN) to recognize objects from a video camera from various angles and under various lighting conditions.

The next task, once a toy is identified, is to pick it up. Writing a general-purpose pick-up-anything program for a robot arm is a difficult task involving a lot of higher mathematics (google inverse kinematics to see what I mean). What if we let the robot sort this out for itself? We will use genetic algorithms that permit the robot to invent its own behaviors and learn to use its arm on its own.

Our robot needs to understand commands and instructions from its owner (us). We will use natural language processing not just to recognize speech, but to understand intent, so that the robot can create goals consistent with what we want it to do.
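The genetic algorithm idea mentioned above, letting the arm work out its own motions, can be illustrated in miniature. This sketch evolves two joint angles of a planar two-link arm so that the hand reaches a target point; the link lengths, target, population sizes, and mutation rate are all invented for the example:

```python
import math
import random

ARM = (10.0, 8.0)        # hypothetical link lengths
TARGET = (12.0, 6.0)     # hypothetical point we want the hand to reach

def hand_position(angles):
    """Forward kinematics for a planar two-link arm."""
    a1, a2 = angles
    x = ARM[0] * math.cos(a1) + ARM[1] * math.cos(a1 + a2)
    y = ARM[0] * math.sin(a1) + ARM[1] * math.sin(a1 + a2)
    return x, y

def fitness(angles):
    """Negative distance from hand to target: higher is better."""
    x, y = hand_position(angles)
    return -math.hypot(x - TARGET[0], y - TARGET[1])

def evolve(generations=200, pop_size=30, mutation=0.1):
    """Evolve joint angles: keep the best half, breed mutated offspring."""
    pop = [[random.uniform(-math.pi, math.pi) for _ in range(2)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # selection
        children = [[a + random.gauss(0, mutation)
                     for a in random.choice(survivors)]  # mutation
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)
```

Nothing here solves the inverse kinematics equations; the population simply discovers angle pairs that score well, which is exactly the appeal of the technique for awkward, high-dimensional arm problems.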
We will use a neat technique called the "fill in the blank" method to allow the robot to reason from the context of a command. This process is useful for a lot of robot planning tasks.

The robot's next problem is avoiding the stairs and other hazards. We will use operant conditioning to have the robot learn, through positive and negative reinforcement, where it is safe to move.

The robot will need to be able to find the toy box to put items away, as well as have a general framework for planning moves into the future. We will use decision trees for path planning, and discuss pruning for quickly rejecting bad plans. We will also introduce forward and backward chaining as a means to quickly plan to reach a goal. If you imagine what a computer chess program must do – looking several moves ahead and scoring good moves versus bad moves before selecting a strategy – that will give you an idea of the power of this technique. This type of decision tree has many uses and can handle many dimensions of strategies. We'll be using it to find a path to our toy box to put toys away.

I have four wonderful, talented, and delightful grandchildren who love to come and visit. The oldest grandson is six years old, and autistic, as is my granddaughter, the third child. I introduced the grandson, William, to the robot, and he immediately wanted to have a conversation with it. He asked, "What's your name?" and "What do you do?" He was disappointed when the robot made no reply. So for the grandkids, we will be developing an engine for the robot to carry on a small conversation. We will be creating a robot personality to interact with children. William had one more request of this robot: he wants it to tell and respond to knock-knock jokes.
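The decision-tree path planning with pruning described earlier can be sketched as a branch-and-bound walk over a grid. The grid map and unit move costs are assumptions for this example; the 1 cells play the role of hazards such as the stairs.

```python
def plan_path(grid, start, goal):
    """Explore moves as a decision tree over a grid of 0 (clear) and
    1 (hazard) cells, pruning any branch whose cost already meets or
    exceeds the best complete plan found so far."""
    best = {"cost": float("inf"), "path": None}
    rows, cols = len(grid), len(grid[0])

    def explore(pos, path, cost):
        if cost >= best["cost"]:
            return                      # prune: this branch cannot beat the best plan
        if pos == goal:
            best["cost"], best["path"] = cost, path
            return
        r, c = pos
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in path):
                explore((nr, nc), path + [(nr, nc)], cost + 1)

    explore(start, [start], 0)
    return best["path"]
```

Because pruned branches are rejected as soon as their cost matches an already-known plan, most of the tree is never expanded – the same effect the chess analogy describes.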
While developing a robot with actual feelings is far beyond the state of the art in robotics or AI today, we can simulate having a personality with a finite state machine and some Monte Carlo modeling. We will also give the robot a model for human interaction so that the robot will take into account the child's mood as well. I like to call this type of software an artificial personality (AP) to distinguish it from our artificial intelligence. AI builds a model of thinking, and AP builds a model of emotion for our robot.

Introducing the robot and our development environment

As shown in the photo, our robot has tracks, a mechanical six degree-of-freedom arm, and a computer. Let's call him TinMan, since, like the storybook character in The Wizard of Oz, he has a metal body and all he wants for is a brain.

Our tasks center around picking up toys in an interior space, so our robot has a solid base with two motors and tracks for driving over a carpet. Our steering method is the tank-type, or differential drive, where we steer by sending different commands to the track motors. If we want to go straight ahead, we set both motors to the same forward speed. If we want to travel backward, we reverse both motors the same amount. Turns are accomplished by moving one motor forward and the other backward (which makes the robot turn in place) or by giving one motor more forward drive than the other. We can make any sort of turn this way.

In order to pick up toys we need some sort of manipulator, so I've included a six-axis robot arm that imitates a shoulder-elbow-wrist-hand combination that is quite dexterous, and, since it is made out of standard digital servos, quite easy to wire and program.

You will note that the entire robot runs on one battery. You may want to split that and have a separate battery for the computer and the motors.
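The differential-drive steering described above reduces to a small mixing function. This is a sketch under the assumption that motor speeds are normalized to the range -1.0 to +1.0; the real robot sends motor commands to the Arduino instead.

```python
def mix_differential(throttle, turn):
    """Convert a forward/backward throttle and a turn rate (each in
    -1.0 .. +1.0) into left and right track motor speeds."""
    left = throttle + turn
    right = throttle - turn
    # Clamp so a hard turn at full throttle cannot exceed the motor range.
    return (max(-1.0, min(1.0, left)),
            max(-1.0, min(1.0, right)))
```

Equal speeds drive straight, equal-and-opposite speeds spin the robot in place, and anything in between produces an arc – exactly the three cases described above.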
This is a common practice, and many of my robots have had separate power for each. If you do, make sure to connect the ground wires of the two systems together. I've tested my power supply carefully and have not had problems with temperature or noise, although I don't run the arm and drive motors at the same time. If you have noise from the motors upsetting the Arduino (and you will be able to tell, because the Arduino will keep resetting itself), you can add a small filter capacitor of 10 µF across the motor wires.

The main control of the TinMan robot is the Raspberry Pi 3 single board computer (SBC), which talks to the operator via a built-in Wi-Fi network. An Arduino Mega 2560 controller based on the Atmel architecture provides the interface to the robot's hardware components, such as motors and sensors. You can refer to the preceding diagram on the internal components of the robot.

We will be primarily concerned with the Raspberry Pi 3 SBC, which is the brains of our robot. The Raspberry Pi 3 acts as the main interface between our control station, which is a PC running Linux in a virtual machine, and the robot itself, via a Wi-Fi network. Just about any low-power, Linux-based SBC can perform this job, such as a BeagleBone Black, Odroid XU4, or an Intel Edison.

Connected to the SBC is an Arduino Mega 2560 microcontroller board that will serve as our hardware interface. We could do much of the hardware interfacing with the Pi if we so desired, but by separating out the Arduino we don't have to worry about the advanced AI software running in the Pi 3 disrupting the timing of sending PWM (pulse width modulation) controls to the motors, or the PPM (pulse position modulation) signals that control our six servos in the robot arm.
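With the Arduino handling the timing-critical PWM and PPM signals, the Pi only needs to send it short commands over a serial link. The one-line text protocol below (device, channel, value) is purely illustrative – an assumption made for this sketch, not the actual firmware protocol used by TinMan.

```python
def make_command(device, channel, value):
    """Format one command line for the Arduino. The 'DEV,CH,VAL\\n'
    wire format here is a hypothetical protocol for illustration:
    MOT = a drive motor channel, SRV = one of the six arm servos."""
    if device not in ("MOT", "SRV"):
        raise ValueError("unknown device: %r" % device)
    return "%s,%d,%d\n" % (device, channel, int(value))

def send(port, device, channel, value):
    """Write a command to any file-like serial port, such as a
    serial.Serial opened on the Arduino's USB device, or a test buffer."""
    port.write(make_command(device, channel, value).encode("ascii"))
```

Keeping the wire format to a single human-readable line makes the link easy to debug: you can watch the traffic in a serial monitor, or type commands at the Arduino by hand.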
Since our motors draw more current than the Arduino can handle itself, we need a motor controller to amplify our commands into enough power to move the robot's tracks. The servos are plugged directly into the Arduino, but have their own connection to the robot's power supply. We also need a 5V regulator to provide the proper power from the 11.1V rechargeable lithium battery pack to the robot. My power pack is a rechargeable 3S1P (three cells in series, one in parallel) 2,700 mAh battery normally used for quadcopter drones, and it came with the appropriate charger. As with any lithium battery, follow all of the directions that came with the battery pack, and recharge it in a metal box or container in case of fire.
