CogSci_17Robotics.pdf
Robotics
I. Classical AI (GOFAI)
   A. Shakey
   B. Limitations
II. Situated cognition
   A. Dynamical systems
      1. Stepping reflex
      2. Walking
      3. Dynamical systems and chaos theory
   B. Biorobotics and morphological computation
   C. Subsumption architectures
      1. Reflexive responses
      2. Layers built up from behaviors
   D. Situated vs. embodied cognition
III. Human Brain Project
IV. Living robots: xenobots
V. The uncanny valley effect

GOFAI Robotics
GOFAI (Good Old-Fashioned Artificial Intelligence) Robotics
Ø SHAKEY
− Early robot developed by the Stanford Research Institute (c. 1970)
− Called SHAKEY because of its jerky movements
− First robot able to move around, perceive, follow instructions, and carry out complex instructions in a realistic environment (as opposed to a purely virtual micro-world like SHRDLU's)
− The software that allowed SHAKEY to operate ran on a separate computer system that communicated with SHAKEY via a radio antenna
− Programs permitted SHAKEY to
  o Plan ahead
  o Learn how to perform tasks better
The physical environment in which SHAKEY operated was a suite of rooms (overall about 40 ft x 60 ft) that were empty except for some boxes that SHAKEY could move around
SHAKEY was the first robot to use a layered architecture: complex behaviors are hierarchically organized
Low-level actions (LLAs) are SHAKEY's basic behaviors, e.g., rolling forward or backward, taking photos with its onboard camera, moving its head
Intermediate-level actions (ILAs)
− Chain LLAs
− Could recruit other ILAs
Ø Ex: The GETTO action routine calls upon the NAVTO routine for navigating around the current room, as well as the GOTOROOM routine
STRIPS planner (Stanford Research Institute Problem Solver)
− Similar to Newell and Simon's General Problem Solver (means-end analysis)
− Translates a particular goal, e.g., fetching a block from an adjacent room, into a sequence of ILAs
PLANEX monitors the execution of the plan
Ø Ex: Calculates the degree of error at a certain stage of executing a plan,
on the assumption that each ILA would introduce a degree of "noise"
− When the degree of error reaches a certain threshold, PLANEX instructs SHAKEY to take a photo to check its position

The main objection to the traditional GOFAI approach to artificial agents like SHAKEY is that the robot is not embedded in a real-life environment and can never really come to terms with real-life problems and challenges
− It can only operate in a highly constrained environment
− It cannot learn to solve problems – all solutions to problems are built in
Ø Ex: It cannot look at a photo it has never seen before of a person in a room with a banana just out of reach, and suggest a plan of action
− Even a young child can solve the problem, but classical AI cannot
➜ To address these issues, situated cognition theorists propose a dynamical systems-like approach to robotics, in which behaviors emerge out of complex interactions between an organism and its environment

Dynamical Systems
Ø To understand how dynamical systems function, let's start with an example: the stepping reflex
− In the first few months of life, infants are able to make stepping movements
− They stop making these movements during the "non-stepping window"
− The movements reappear when the infant starts walking at around 11 months of age
Traditional explanation for the U-shaped developmental trajectory of stepping:
− The infant's initial stepping movements are purely reflexive
− They disappear during the non-stepping window because the cortex has matured enough to inhibit reflex responses – but is not sufficiently mature to bring stepping movements under voluntary control
Studies by Esther Thelen and Linda Smith challenged this view
Their research indicated that stepping movements could be artificially induced or inhibited in infants by manipulating features of the environment
− Infants in the non-stepping window will make stepping movements
  o When they are suspended in warm water
  o When they are placed on a treadmill
− On the other hand, stepping movements
can be inhibited before the start of the non-stepping window by attaching small weights to the baby's ankles
➜ Conclusion: Stepping movements vary independently of how the cortex has developed
They depend on a number of parameters, such as leg fat, muscle strength, gravity, and inertia
☞ In other words, walking does not involve a specific set of motor commands that "program" the limbs to behave in certain ways (top-down)
Rather, the activity of walking emerges out of complex interactions between muscles, limbs, and different features of the environment (bottom-up)

Dynamical models
− Are used to understand how agents are embedded in their environments
− Use calculus-based methods to track the evolving relationship between a small number of variables over time
Dynamical systems are
− Complex
− Self-organizing
− Emergent
− Nonlinear
Ø Ex: When our brain issues a command to move our hand to grasp an object, the perception we receive of our initial movement feeds back and alters our subsequent movements, allowing us to fine-tune the movement
− Appear non-predictable and "chaotic"
✦ We tend to perceive the highly nonlinear dynamics of complex systems as "chaotic"
− In a linear system, changing an input by multiplying it by a specific amount yields a directly proportional change – an output that also scales by that amount
− In a nonlinear system, even a tiny or infinitesimal change in the input may result in an output that is wildly different
  o That is, the system is exceptionally "sensitive" to even the tiniest changes in input
➜ Because we cannot track those changes, the nonlinear system seems crazy, totally chaotic to us, even though it may in fact be a fully lawlike deterministic system
− In addition, what look to us like the "same" inputs may actually be slightly different, producing dramatically different results that appear chaotic
☞ This has been dubbed the butterfly effect: "Does the flap of a butterfly's wings in Brazil set off a tornado in Texas?"
➜ ★ This sort of "sensitive dependence
on initial conditions" is why weather prediction is so difficult

Situated Cognition
Situated cognition theorists
− Propose a dynamical systems-like approach to robotics
− Believe that we should start small and focus on basic, ecologically valid problems
Ø For instance, studying insects can allow us to better understand how organisms interact with their environment
− Insects achieve high degrees of "natural intelligence" by exploiting direct connections between their sensory receptors and effector limbs
Ex: Female crickets are extremely good at recognizing and locating mates on the basis of the song the males make (phonotaxis)
  o It seems as if a program to do this would be very complex – one would need to identify the sound, work out where it comes from, then form motor commands that will take the cricket to the right place
  o However, it turns out crickets are simply hard-wired to move in the direction of the ear with the higher vibration (provided that the vibration is suitably cricket-like)
  o There is no "direction-calculating mechanism," no "male cricket identification mechanism," and no "motor controller"
Barbara Webb has used this model to build robot crickets that can identify the source of a sound and move automatically toward that source without any of the systems normally assumed by GOFAI
− This is an instance of biorobotics: using knowledge of living insects, as well as AI, to create agents capable of moving about and solving problems in their environment
− It is also an example of morphological computation: exploiting features of body shape to simplify what might otherwise be highly complex information-processing tasks
Applying the idea of morphological computation to robotics means building as much of the computation as possible directly into the physical structure of the robot
Ø Yokoi hand:
− Using a traditional computational approach, grasping an object (e.g., a glass) requires computing the object's shape and configuring the hand to conform to that shape
− The Yokoi hand is
instead constructed from elastic and deformable materials that allow the hand to adapt itself to the shape of the object being grasped

Subsumption Architectures
Webb's robot crickets and the Yokoi hand are examples of what Rodney Brooks calls subsumption architectures
− These robots do not operate by executing algorithms to map their surroundings, etc., but rather with a set of relatively simple stimulus-response mechanisms
− Their intelligence aggregates from the bottom up, rather than being organized explicitly from the top down
− Based on the idea that intelligence – and consequently, the performance of efficacious action – does not require formal symbolic representation
Robots constructed with this architecture make reflexive responses to environmental stimuli
− Representations exist as production rules (if-then statements) or reflexes that map a stimulus onto a behavior
− Knowledge does not exist in isolated representations; rather, knowledge is embodied
Subsumption architectures are made up of layers that are built up from behaviors
Ø Ex: Obstacle-avoidance layer
− Directly connects perception (sensing an obstacle) to action (either swerving to avoid the obstacle, or halting when the obstacle is too big to go around)
− Whatever other layers are built into the subsumption architecture, the obstacle-avoidance layer is always online and functioning
− For instance, there may be a "higher" layer that directs the robot toward a food source, but the obstacle-avoidance layer will still come into play whenever the robot finds itself on a collision course with an obstacle
Rodney Brooks's robot Allen
− Basic layer is the obstacle-avoidance layer
− Over time, more and more layers were added, mimicking how evolution works
− Semi-autonomous subsystems operate relatively independently of each other, though some subsystems can override others
− There is no central "controller" comparable to PLANEX in SHAKEY maintaining a continuously updated model of the world and its own state
− Direct perception-action links
allow the robot to deliver immediate motor responses to sensory input
It can be argued that subsumption architecture robots do not constitute intelligent agents because they do not really involve decision-making processes
One response to this challenge is hybrid architectures that have
− A subsumption architecture for low-level reactive control ("scaled-up insects") WITH
− A traditional central planner for high-level decision-making ("scaled-down supercomputers") grafted onto it
Another way of meeting this challenge is behavior-based robots
− Unlike subsumption architectures, behavior-based ones represent their environments and use those representations in planning actions
− However, unlike symbolic architectures, there is no central planning system
Ex: The TOTO robot can identify the shortest route between previously visited landmarks
− It uses a topological, rather than a metric, map – one that simply records whether two landmarks are connected, but not how far apart they are – and selects the path that goes via the smallest number of landmarks
− Bees operate in a similar fashion to identify shortcuts between feeding sites

Situated Versus Embodied Cognition
Brooks also differentiates between robots that are situated and those that are embodied
− A situated creature is one that is "embedded in the world, and which does not deal with abstract descriptions, but through its sensors with the here and now"
− An embodied creature is "one that has a physical body and experiences the world directly through the influence of the world on that body"
Ø An airline reservation system is situated but not embodied
Ø An assembly-line robot that spray-paints parts in an automobile manufacturing plant is embodied but not situated – it doesn't interact dynamically or adaptively with its environment

Human Brain Project
The Human Brain Project sponsored by the European Commission aims to
− Simulate the brain: generate digital reconstructions and simulations of the mouse brain and ultimately the human
brain
− Implement models of the brain in neuromorphic computing and neurorobotic systems
− Develop a model of the brain that merges theory-driven (top-down) and data-driven (bottom-up) approaches for understanding learning, memory, attention, and goal-oriented behaviors
− Develop tools to explore new diagnostic indicators and drug targets

Living Robots
Xenobot: A small (< 1 mm) biological machine
− Created by scientists at the University of Vermont and Tufts University (Kriegman, Blackiston, Levin, and Bongard, 2020)
− Built from the ground up using biological cells
− Made of skin cells and heart cells developed from stem cells harvested from frog embryos
− Designed and programmed by a supercomputer using an evolutionary algorithm
  o A few hundred simulated cells were reassembled into myriad forms and body shapes
  o The most successful simulated organisms were kept and refined
− Single stem cells were then cut and joined, using tiny forceps and an electrode, into a close approximation of the designs specified by the computer
− The cells began to work together:
  o Skin cells formed a more passive architecture, while the once-random contractions of heart muscle cells created ordered forward motion, as guided by the computer design and aided by spontaneous self-organizing patterns
✧ Xenobots are able to move in a coherent fashion to explore their watery environment and can survive for days or weeks, powered by embryonic energy stores
✧ Functions
− Groups of xenobots can move around in circles, pushing pellets into a central location – spontaneously and collectively
− Others were built with a hole through the center and were able to use it as a pouch to successfully carry an object
− When a xenobot was cut in half, it stitched itself back up and kept going
✧ Potential applications
− Intelligent drug delivery: carrying medicine to a specific place in the body
− Traveling in arteries to scrape out plaque
− Searching out and breaking down harmful compounds or radioactive waste
− Gathering microplastics in the oceans
− Serving
as a new material for technologies that is fully biodegradable, unlike steel, concrete, or plastic, which can cause ecological and health problems
− Helping to develop a greater understanding of how complex behaviors emerge from simple cells

The Uncanny Valley Effect
One limitation of robotics research at present is the uncanny valley effect, which poses a problem for the creation of avatars that are a little too life-like
Uncanny valley effect: humanoid objects that imperfectly resemble actual human beings provoke uncanny feelings of uneasiness and revulsion in observers
− As the appearance of a robot is made more human, observers' emotional response to it generally becomes increasingly positive and empathetic, until it reaches a point beyond which the response quickly turns into strong revulsion
Many different explanations have been proposed for this effect
− One general explanation is that we are constantly evaluating whether people are trustworthy
− If an entity looks almost but not quite human, that immediately sets off big alarm bells

Video References
Videos excerpted from:
Shakey the Robot: The First Robot to Embody Artificial Intelligence
https://www.youtube.com/watch?v=7bsEN8mwUB8
Newborn Reflexes
https://www.youtube.com/watch?v=_JVINnp7NZ0
Dynamic Systems Theory - Texas State University
https://www.youtube.com/watch?v=4t2ww3gfKrg
This is the First LIVING Robot and it's Unbelievable
https://www.youtube.com/watch?v=js6uTRT8KO4
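The STRIPS idea described in the GOFAI section – translating a goal into a sequence of actions by matching preconditions against the current state – can be made concrete with a small sketch. This is not SHAKEY's actual operator set; the two-room "fetch a box" domain and operator names below are hypothetical stand-ins, and the search is a simple breadth-first pass rather than SRI's means-end analysis.

```python
# Minimal STRIPS-style planner sketch: states are sets of facts, operators
# have precondition/add/delete lists, and breadth-first search finds a
# sequence of operators reaching the goal. Hypothetical SHAKEY-like domain.
from collections import deque

class Op:
    def __init__(self, name, pre, add, delete):
        self.name, self.pre, self.add, self.delete = name, pre, add, delete

    def applicable(self, state):
        return self.pre <= state          # all preconditions hold

    def apply(self, state):
        return (state - self.delete) | self.add

def plan(start, goal, ops):
    """Return a list of operator names turning `start` into a goal state."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                 # goal facts all satisfied
            return steps
        for op in ops:
            if op.applicable(state):
                nxt = frozenset(op.apply(state))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [op.name]))
    return None                           # no plan exists

# Hypothetical two-room world: fetch the box from room B, return to room A.
ops = [
    Op("GOTOROOM(A,B)", {"robot-in-A"}, {"robot-in-B"}, {"robot-in-A"}),
    Op("GOTOROOM(B,A)", {"robot-in-B"}, {"robot-in-A"}, {"robot-in-B"}),
    Op("PICKUP(box)", {"robot-in-B", "box-in-B"}, {"holding-box"}, {"box-in-B"}),
]

print(plan({"robot-in-A", "box-in-B"}, {"robot-in-A", "holding-box"}, ops))
# → ['GOTOROOM(A,B)', 'PICKUP(box)', 'GOTOROOM(B,A)']
```

The point of the sketch is the division of labor the notes describe: the goal is stated declaratively, and the planner works out which intermediate-level actions to chain.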
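The "sensitive dependence on initial conditions" discussed under dynamical systems can be demonstrated in a few lines. The logistic map below is a standard textbook example of a deterministic nonlinear system in its chaotic regime (r = 4), not a weather model; it is chosen only because it is the simplest system that shows the effect.

```python
# Sensitive dependence on initial conditions in the logistic map
# x' = r * x * (1 - x). At r = 4.0 the map is chaotic: two starting
# values differing by one part in two million soon follow entirely
# different trajectories, even though every step is fully deterministic.
def logistic(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic(0.2)
b = logistic(0.2000001)    # a tiny perturbation of the input
print(abs(a - b))           # typically the trajectories are far apart by now
```

This is the butterfly effect in miniature: nothing random is happening, yet untracked differences in the input make long-range prediction hopeless.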
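The layered, no-central-controller design described in the subsumption architecture section can also be sketched. The two layers, the sensor dictionary, and the command names below are all made up for illustration; the structural point is that the obstacle-avoidance layer is always online and overrides lower layers, with no world model anywhere.

```python
# Sketch of a two-layer subsumption controller: a lower "seek food" layer
# proposes motion, but the obstacle-avoidance layer is always active and
# subsumes (overrides) it. Pure stimulus-response rules; no central planner.
def obstacle_avoidance(sensors):
    """Highest-priority layer: reflexive responses to obstacles."""
    if sensors.get("obstacle_ahead"):
        return "halt" if sensors.get("obstacle_large") else "swerve"
    return None                       # no opinion – let lower layers act

def seek_food(sensors):
    """Lower layer: steer toward a sensed food source, else wander."""
    if sensors.get("food_direction") is not None:
        return "turn_toward_food"
    return "wander"

LAYERS = [obstacle_avoidance, seek_food]   # highest priority first

def act(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:        # first layer with an opinion wins
            return command

# Even while food is sensed, a looming obstacle takes over:
print(act({"food_direction": 90, "obstacle_ahead": True}))  # → swerve
```

Each layer is an if-then production rule mapping a stimulus onto a behavior, matching the notes' description of representations in these robots.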
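TOTO's topological map – connectivity between landmarks with no distances – makes "shortest route" mean "fewest landmarks," which is exactly what breadth-first search computes on a graph. The landmark names and the corridor map below are invented for illustration; the source does not describe TOTO's actual implementation.

```python
# Topological route-finding in the spirit of TOTO: the map records only
# which landmarks are connected, never how far apart they are, so the
# "shortest" route is the one passing through the fewest landmarks.
from collections import deque

def fewest_landmarks(adjacency, start, goal):
    """Breadth-first search returns a path with the fewest hops."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                        # landmarks not connected

# Hypothetical landmark graph for a small room:
corridor_map = {
    "desk":   ["door", "shelf"],
    "door":   ["desk", "corner"],
    "shelf":  ["desk", "corner"],
    "corner": ["door", "shelf", "window"],
    "window": ["corner"],
}
print(fewest_landmarks(corridor_map, "desk", "window"))
# → ['desk', 'door', 'corner', 'window']
```

Note that no coordinates or distances appear anywhere – a metric map would be needed for those – which mirrors the contrast the notes draw between topological and metric representations.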