Notes Karolina Kozikowska Group 7 2023

TOPIC 1:
Definition of a robot -> A robot is an autonomous system which exists in the physical world, can sense its environment, and can act on it to achieve some goals.
Autonomous -> acts on the basis of its own decisions, not controlled by humans
Teleoperated -> externally controlled by humans
To be embodied -> to exist in a physical world -> capable of maintaining itself in the outside world -> adapting and reacting differently to the changing environment: - can include learning mechanisms
To be situated in a physical world -> to be able to exist and sense the world
Robots and sensing -> if a system does not sense but is magically given information, we do not consider it a true robot -> if a system doesn't sense/get information, then it is not a robot, as it can't respond to what goes on around it
Control theory -> mathematical study of the properties of automated control systems -> one of the foundations of engineering
Cybernetics -> study and comparison of communication and control processes in biological and artificial systems -> combines theories and principles from neuroscience and biology with those from engineering, with the goal of finding common properties and principles in animals and machines -> machines would use a similar "steersman" to produce sophisticated behavior similar to that found in nature -> focuses on the coupling, combining, and interaction between the mechanism or organism and its environment
Biomimetic -> imitating biological systems in some way
Tortoises
-> construction —> 2 sensors (light & tactile) —> 2 motors (steering & forward-backward motion)
-> dubbed Elmer & Elsie
-> programmed reactive control —> rotate light sensor, inhibited by light intensity —> move somewhere, if bumped, change direction
-> observed behaviour —> if no light, robot turns (searching for light) —> if light, the turning stops (approach light) —> if bump, changes path (obstacle avoidance) —> known as emergent behavior: the sequence of actions (patterns) is complex, not programmed in directly (i.e. not hard-coded), and hard to predict from the governing control scheme
Shakey
-> early AI-inspired robot
-> construction —> contact sensors —> camera
-> mechanism —> focuses on reasoning and planning —> works only in carefully prepared lab conditions —> simplification of recognition —> slow operation compared to the tortoises —> looks less natural than Braitenberg vehicles
-> conclusion: complexity of robot design ≠ complexity of behavior
-> similar robots: CART, HILARE, CMU Rover
Bots
-> simulated robots
-> no hardware problems
-> perfect sensing and actuating (no need for odometry)
-> simple communication
-> open up the possibilities of > using many bots (agents) > studying group (collective) behavior
- boids: — close → don't collide — middle → align heading — far → move to group — many agents -> produces believable herd behavior -> no designated leader -> starting point for crowd simulations
Types of connections between the sensors and motors (a minimal sketch follows below):
— excitatory connection -> the stronger the sensory input, the stronger the motor output
— inhibitory connection -> the stronger the sensory input, the weaker the motor output
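A minimal sketch of how these two connection types could be wired, in the spirit of the light-seeking/light-fleeing vehicles discussed next (the vehicle geometry, gain values, and function names are illustrative assumptions, not from the notes):

```python
# Hypothetical two-sensor, two-motor vehicle: each motor speed is a direct
# function of a light sensor reading, with no memory or planning involved.

def excitatory(sensor_reading, gain=1.0):
    # stronger sensory input -> stronger motor output
    return gain * sensor_reading

def inhibitory(sensor_reading, gain=1.0, max_output=1.0):
    # stronger sensory input -> weaker motor output
    return max_output - gain * sensor_reading

def step(left_light, right_light, crossed=True, connection=excitatory):
    # crossed wiring (left sensor drives right motor) with excitatory
    # connections makes the vehicle turn toward the light source
    if crossed:
        return connection(right_light), connection(left_light)  # (left_motor, right_motor)
    return connection(left_light), connection(right_light)

print(step(0.2, 0.8))  # more light on the right -> left motor faster -> turns right, toward the light
```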
Braitenberg vehicles
-> 14 vehicles are constructed, each adding more "brain power", displaying behaviors ranging from simple to complex
-> understanding intelligence by creating "intelligence" —> synthetic approach to creating brains displaying emergent behaviors —> demonstrating the interplay between brain, environment, and resulting behavior
-> sensors were directly connected to the motors, so that the sensory input could drive the motor output
-> types:
— 1-4: > purely reactive systems > no learning or memory of past events is involved > limited level of autonomy > the behavior is complex and hard to predict
— 7-14: > higher explicit capabilities, such as — learning — (visual) shape detection — motion detection — short-term memory — predictive planning
Components of a robot:
-> sensors — provide information about the world and the robot itself — define the sensory or perceptual space of the robot and allow it to know its state —> discrete —> continuous —> observable —> partially observable —> hidden
-> effectors & actuators — provide the ability to take actions (locomotion or manipulation)
-> controllers — provide partial/complete autonomy

TOPIC 2:
Learning -> the ability to acquire new knowledge or skills and improve one's performance
A robot can learn about itself, its environment, and other robots.
Reasons why robots should learn:
Tasks: the preset way may be improved upon
Environment: a designer cannot oversee all consequences of a changing environment
Efficiency: it can be more difficult to program and tune a parameter than to learn it
Forms of learning:
— reinforcement
- based on psychology, control theory, neuroscience, artificial neural networks
- learning happens through interaction with the environment and achieving specific goals
- animals and humans learn that way
- versatile learning tool
- used to learn state-action combinations: control policy -> complete state-action table -> temporal differencing -> Q-learning
- value function —> value of being in each state relative to the goal
- trial & error: positive feedback - reward; negative feedback - punishment
- temporal credit assignment - the general problem of assigning the credit or blame to actions taken over time
- works best when the outcome is clear & the problem is not too huge
- mapping situations to "best" actions to get closer to the goal
- through optimizing a reward signal (which can be supervised or unsupervised)
- exploration -> process of trying all possible state-action combinations
- exploitation -> process of using what has been learned
- exploration vs. exploitation -> trade-off between constantly learning (at the cost of doing things less than perfectly) and using what is known to work well (at the cost of missing out on further improvements)
- how exactly they learn -> the robot has a table with all possible states as rows and all possible actions as columns; after the robot tries a particular state-action combination and sees what happens, it updates that entry in the table (see the sketch below)
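A minimal sketch of such a table update, using the standard Q-learning rule with epsilon-greedy action choice (the learning rate, discount factor, and epsilon values are illustrative assumptions; the notes only specify the table itself):

```python
import random

n_states, n_actions = 4, 2
Q = [[0.0] * n_actions for _ in range(n_states)]   # states as rows, actions as columns
alpha, gamma, epsilon = 0.1, 0.9, 0.2              # assumed hyperparameters

def choose_action(state):
    # exploration vs. exploitation trade-off
    if random.random() < epsilon:
        return random.randrange(n_actions)                        # explore
    return max(range(n_actions), key=lambda a: Q[state][a])       # exploit

def update(state, action, reward, next_state):
    # temporal-differencing (Q-learning) update of one table entry
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
```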
- multiple-robot situations -> spatial credit assignment - a robot may take an action in a state that does nothing, but if another robot happens to do just the right next thing next to it, either may assume the desirable outcome is due to itself and not the other
- optimising a reward signal can be supervised or unsupervised
- elements
— environment
— agent - temporally situated in the environment & has a goal to affect the environment via actions
— state - representation of the local environment as perceived by the agent
— reward - the immediate reward for being in a state
— value - how good the current state is, taking into account possible future rewards
— policy - which action do we take given a certain state?
- problems
-> the utility of actions also depends on the environment
-> possibility of delayed reward
-> short-term losses versus long-term gains
-> exploration versus exploitation (trial-and-error)
Forgetting might actually be necessary
-> making room for new information
-> replacing old information that is no longer correct
What to forget?
-> some old knowledge/skill might be incompatible with new experiences
-> with incremental learning, agents tend to forget older experiences
-> problem: old isn't always irrelevant
Lifelong learning
The idea is to balance exploitation and exploration even after deploying a robot.
Can result in robots that keep improving constantly, but will also produce worse or undesirable behavior at times.
— unsupervised
- learning is done from the inputs alone, for example clustering of data
- usually a similarity or distance measure is optimized as a form of feedback
- parameters: the trainable weights of the neural network
- hyperparameters: the settings that describe the neural network shape and size as well as other learning settings > in these notes, k — the number of clusters > number of layers, activation functions, learning rate, …
- clustering -> grouping similar data together -> a teacher is not necessary -> still hyperparameters to tune (k in k-means; a sketch follows below)
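A minimal k-means sketch of that idea (the 1-D data, k = 2, and the fixed iteration budget are illustrative assumptions): points are repeatedly assigned to the nearest prototype, and each prototype moves to the mean of its group, so only a distance measure is needed, no teacher.

```python
# Minimal k-means on 1-D data.
data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
centers = [0.0, 6.0]                     # k = 2 initial prototypes (a hyperparameter)

for _ in range(10):                      # fixed iteration budget for simplicity
    clusters = [[], []]
    for x in data:                       # assign each point to its nearest center
        clusters[min((abs(x - c), i) for i, c in enumerate(centers))[1]].append(x)
    # move each center to the mean of its cluster (keep it if the cluster is empty)
    centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]

print(centers)   # ~[1.0, 5.07]: two groups found from the inputs alone
```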
- autoencoding -> reducing the dimensionality of data by learning to compress to (encode) a smaller latent representation and decompress from (decode) that representation:
> the latent space can be used for — supervised learning — clustering — visualisation: >> MNIST handwritten digit data set >> 2D projection of a VAE latent space
— supervised
- an external supervisor or teacher tells the learner how to perform the action
- feedback is the error (difference) between the produced output action and the desired target action
- neural networks use it to make the network output closely match the target value: they learn by updating the weights between nodes
- backpropagation — given an input-target pair, a training step consists of (a sketch follows at the end of this topic's notes):
› forward propagation -> calculate the activation values for nodes in the first layer (inputs) -> apply the activation function -> calculate the activation values for the next layer -> repeat until the network output is determined
› backward propagation -> determine the error (difference between output and target/teacher value) -> calculate the contribution to the error from each node in the previous layer -> update the weights relative to that contribution (delta rule) -> propagate that relative error to the previous layer -> repeat until the input weights are adjusted
- ALVINN
— robot with a neural network
— objective: autonomous lane following
— input: image of the road ahead
— output: steering direction
› step by step: record human driving angles and camera view; ALVINN shows how it would steer, given an image; the human steering angle is presented; ALVINN computes the difference (output error); ALVINN updates its weight values w and v
— learning from demonstration:
- to make it work a robot has to
-> pay attention to a demonstration
-> separate what is relevant to the task being taught from all irrelevant information
-> match the observed behavior to its own behaviors and effectors, taking care of reference frames
-> adjust the parameters so the imitation makes sense and looks good
-> recognize and achieve the goals of the performed behavior
- can reduce the learning effort
- effective but hard to implement
- examples of good behavior are a good starting point
- if the state-action space is very large (or continuous), it reduces learning time to a working policy but may make it difficult to find an optimal solution
- challenging to make the robot remember what it experienced while trying (or observed, in the case of imitation) & how it can generate that behavior again
- putting through -> the learner experiencing the task directly
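As promised above, a minimal numpy sketch of one backpropagation training step for a tiny network (the 2-3-1 layer sizes, sigmoid activation, and learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))            # assumed 2 inputs -> 3 hidden nodes
W2 = rng.normal(size=(3, 1))            # 3 hidden nodes -> 1 output
x, target = np.array([0.5, -0.2]), np.array([1.0])
lr = 0.5

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# forward propagation: layer by layer until the output is determined
h = sigmoid(W1 @ x)                     # hidden activations
y = sigmoid(W2.T @ h)                   # network output

# backward propagation: error, then weight updates via the delta rule
err = y - target                        # difference between output and teacher value
delta_out = err * y * (1 - y)           # output-layer delta
delta_hid = (W2 @ delta_out) * h * (1 - h)   # error contribution propagated back
W2 -= lr * np.outer(h, delta_out)
W1 -= lr * np.outer(delta_hid, x)
```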
TOPIC 3:
Evolutionary algorithms
— created in the 60s, used in the 80s (back then they could search very complex hypothesis spaces & they can be easily parallelized, therefore they can take advantage of powerful computer hardware), evolved to GP in the 90s, not much used nowadays
— categorised as -> global search heuristics - do not search general-to-specific hypotheses -> search techniques
— approach to learning based on simulated evolution (imitating natural evolution)
— constantly mutate & recombine parts of known solutions to create new ones
— process -> get a problem, make a list of possible solutions, evaluate and rank solutions based on how good they are, make the best solutions interact until one of the new ones is a good solution to the problem / a stopping criterion is met
— components: a learning problem (we want to solve it); a fitness function f(x) (defining the problem); an initial pool of solutions; an evolution strategy (tells us how we should modify our initial pool of solutions based on how they perform on f(x))
— elitism -> ensuring that 1. the solution quality obtained by the GA will not decrease from one generation to the next 2. the best solutions of a generation will carry over to the next
— genetic algorithms (a sketch follows after this topic's notes):
a probabilistic approach decides whether to pick or discard a certain solution
resembles natural selection
genetic operators (crossover, mutation) recombine and mutate selected members of a certain generation
single- vs double-point crossover vs point mutation (reconstructed from a garbled example in the transcript; the parents p1 = 1110100100 and p2 = 0010100010 are legible):
- single-point crossover: cut both parents at one position and swap the tails, e.g. cutting after bit 4 gives children 1110|100010 and 0010|100100
- double-point crossover: swap the segment between two cut positions
- point mutation: flip a single bit, e.g. 1110100100 -> 1110101100
using a population of individuals to search for solutions
individuals exchange "genetic" material
we usually use binary strings / real numbers / ordered-list representations
constraints used: - a penalty term limiting illegal solutions - specific evolutionary operators that ensure only legal solutions are created
selection strategies:
- fitness-proportional selection -> parents allowed to reproduce are assigned a reproduction probability based on their fitness -> danger of premature convergence, since good individuals with a much larger fitness value than other individuals can quickly take over the whole population -> little selection pressure if the fitness values all lie close to each other -> if we add some constant to all fitness values, the resulting probabilities change, so similar fitness functions can lead to completely different results
- tournament selection -> x individuals are selected randomly from the population without replacement -> then the best individual of this group of x is used for creating offspring -> very high values of x cause too high a selection pressure and can therefore easily lead to premature convergence
- truncated selection -> the best M < N individuals are selected and used for generating offspring with equal probability -> high selection pressure, can lead to premature convergence -> does not distinguish between the best and the M-th best individual
- rank-based selection -> individuals receive a rank, where higher ranks are assigned to better individuals -> this rank is used to select a parent
— genetic programming
a form of evolutionary computation in which the individuals in the evolving population are full computer programs rather than binary strings
an extension of genetic algorithms
demonstrated to produce intriguing results, like the design of electronic filter circuits or the classification of segments of protein molecules
uses a tree representation -> we use the same genetic operators that define genetic algorithms -> we need to define the set of primitive functions, e.g. +, √, sin…, which is not trivial -> a solution is represented by an entire program tree, which can make evaluation expensive
advantages of GA -> the underlying concept is easy to understand -> can be used for multi-objective optimization -> supports distributed learning -> the same cooking recipe can be used across a large variety of tasks -> works well when the problem is not differentiable
disadvantages of GA -> no convergence guarantees in finite time -> computing the evaluation function f(x) can be expensive -> lots of implementation parameters need to be defined -> termination criteria?
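A minimal GA sketch combining the pieces above: binary strings, tournament selection, single-point crossover, point mutation, and elitism (the toy "count the ones" fitness function and all parameter values are illustrative assumptions):

```python
import random

L, N, GENS = 10, 20, 30                  # string length, population size, generations

def fitness(ind):                        # toy fitness: count the 1-bits
    return sum(ind)

def tournament(pop, x=3):                # pick the best of x randomly drawn individuals
    return max(random.sample(pop, x), key=fitness)

def crossover(p1, p2):                   # single-point crossover
    cut = random.randrange(1, L)
    return p1[:cut] + p2[cut:]

def mutate(ind, rate=0.05):              # point mutation: flip bits with small probability
    return [b ^ 1 if random.random() < rate else b for b in ind]

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
for _ in range(GENS):
    elite = max(pop, key=fitness)        # elitism: the best solution always carries over
    pop = [elite] + [mutate(crossover(tournament(pop), tournament(pop)))
                     for _ in range(N - 1)]

print(fitness(max(pop, key=fitness)))    # converges toward 10 (all ones)
```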
LECTURE 4
The field of Artificial Life (AL)
- studies models and simulations of agents and complex systems
- tries to formalise a problem
- uses a holistic approach
Models — a domain and a set of rules/equations describing a phenomenon
Simulation — using the AL model to play out the phenomenon over time (epochs)
Agents — can be abstractions of living entities, or bots
Goal of AL - to study the principles of life itself (understanding its source and functionality)
Carbon chauvinism - the assumption that life can be measured solely based on carbon compounds
Important questions AL raises for certain fields:
Biology: How do living organisms interact in biological processes such as finding/eating food, survival strategies, reproduction?
Biochemistry: How can living entities emerge from the interaction of non-living chemical substrates?
Sociology: How do agents interact in artificial societies if they have common or competing goals?
Economy: How do rational entities behave & interact in economic environments such as stock markets, e-commerce, etc.?
Physics: How do physical particles interact in a particular space?
Artificial Art: How can we use artificial life to construct computer art?
Conventional definition of an entity in biology (not entirely right, as it excludes & includes some things): it has to exhibit
-> growth
-> metabolism: consuming, transforming and storing energy/mass; growing by absorbing and reorganizing mass; excreting waste
-> motion: either moving itself, or having internal motion
-> reproduction: the ability to create entities that are similar to itself
-> response to stimuli: the ability to measure properties of its surrounding environment, and act upon certain conditions
AI vs AL:
AI -> goal: create intelligence of any kind -> optionally borrows from real-life organisms and processes (both bottom-up and top-down approaches)
AL -> goal: imitating and simulating real-life organisms and processes -> generally bottom-up; complex behavior should follow from a set of low-level rules
Both use evolutionary computing.
Baldwin effect - an agent learns by interacting with the environment to increase its fitness. A better learner can obtain a higher fitness, although learned knowledge is usually not hard-copied to offspring.
Lamarckian learning - an agent's fitness is evaluated through interacting with the environment. Its fitness directly affects the likelihood of offspring, which retain knowledge throughout generations.
Cellular Automata
- decentralised spatial systems with a large number of simple, identical, locally connected components
- 2 components: a cellular space & a transition rule
- possible dynamics:
— it ends in a stable end-state, with no changes in subsequent time steps
— it develops into a cyclic pattern, where a finite time-slice repeats over and over
— chaotic behavior, where patterns don't repeat, but the configuration keeps on changing
— the time needed to reach dynamic 1/2 is called the transient period
Game of Life (a sketch follows below):
- three rules lead to Turing-completeness
Rule of life: exactly three neighbors
Rule of isolation: only one neighbor
Rule of overcrowding: four or more neighbors
- logic gates from gliders: gliders can propagate information to create AND and NOT gates, which can be combined for universal computation
Simple rules lead to complex effects - the butterfly effect (or chaos theory, if you want to be fancy)
Lorenz attractor - starting from a simplification of modelling convection currents, something interesting happened
Logistic map
- we take a 1D iterative function
- given a value of r we keep iterating until we find a stable value
— for 0 < r < 1, this function always converges in the limit to 0
— for complex r we get fractals
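A minimal sketch of those Game of Life rules on a small wrapping grid (the glider pattern and grid size are illustrative; survival with two or three neighbors is the standard complement of the isolation/overcrowding rules):

```python
# Game of Life step: life (exactly 3 neighbors -> birth),
# isolation (fewer than 2 neighbors -> death), overcrowding (4 or more -> death).
SIZE = 8
glider = {(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)}   # a glider: it propagates

def neighbors(cell):
    x, y = cell
    return {((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)}

def step(alive):
    counts = {}
    for cell in alive:                   # count live neighbors of every candidate cell
        for n in neighbors(cell):
            counts[n] = counts.get(n, 0) + 1
    return {c for c, k in counts.items()
            if k == 3 or (k == 2 and c in alive)}    # birth or survival

cells = glider
for _ in range(4):
    cells = step(cells)   # after 4 steps the glider has moved one cell diagonally
print(sorted(cells))
```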
Distributed system - perception and computation are not centralized in one device, but many
- advantages
-> parallelism: we can run multiple tasks at the same time
-> redundancy: failure in one component does not make the whole system fail
-> safety: a system checking for safety can overrule a system that works towards a goal
- robots are also developed as distributed systems -> suppliers work on components themselves -> components interact through an API (application programming interface) -> researchers monitor modules of the robot during operation -> developers work on separate modules in parallel -> end users often have an app or web interface to interact with the robot
Before ROS
- lack of standards
- little to no code reusability
- standard algorithms needed to be reimplemented for specific hardware
- device drivers needed to be updated for changes in the communication protocol
- a new robot usually meant re-coding libraries from scratch
ROS provides: navigation, task executive, visualization, simulation, perception, control, planning, data logging, message passing, device drivers, real-time capabilities
An OS provides: web browser, email client, window manager, memory management, process management, filesystem, device drivers, scheduler
Definition of ROS:
- a meta-operating system running on top of the OS - standards for discovery and communication in distributed systems
- a package management system that makes compatibility easier
- programming-language agnostic - provides APIs for multiple languages, such as C++, Python, and Java
ROS is a distributed computing environment which can comprise hundreds of nodes, spread across multiple machines. Depending on how the system is configured, any node may need to communicate with any other node, at any time.
Has nodes -> single-purpose programmes that publish or subscribe to information on a certain topic (a sketch follows at the end of this lecture's notes)
Has messages -> timestamped pieces of information on a certain topic, containing fields of data with specific types; can be nested
Communication is asynchronous and many-to-many, which allows several behaviors (e.g., a behavior to avoid obstacles or reach a navigation goal) to access sensing and actuation nodes: your node may subscribe and publish to any relevant topic; ROS handles translation between programming languages
Standardization of communication also led to powerful visualization tools:
- GUI to visualize topics (booleans, images, laser data, navigation paths, etc.)
- very useful for developing and understanding bugs and features
- highly customizable: you can make your own visual markers or borrow from others
- insight into your data streams
Simulation-reality gap:
— algorithms and nodes that work in simulation may depend on assumptions that don't hold in the real world:
— wheels have constant grip
— nodes always produce messages at a constant rate
— information is always up to date
— simplified physics
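A minimal rospy-style (ROS 1) sketch of the node/topic model described above; the node name, topic names, and publishing rate are illustrative choices, not from the notes:

```python
#!/usr/bin/env python
# Minimal ROS 1 node: publishes on one topic and subscribes to another.
import rospy
from std_msgs.msg import String

def on_scan_summary(msg):
    # callback: runs asynchronously whenever some other node publishes here
    rospy.loginfo("heard: %s", msg.data)

rospy.init_node("example_behavior")                      # register with the ROS graph
pub = rospy.Publisher("/status", String, queue_size=10)
rospy.Subscriber("/scan_summary", String, on_scan_summary)

rate = rospy.Rate(1)                                     # 1 Hz
while not rospy.is_shutdown():
    pub.publish(String(data="behavior alive"))           # many-to-many, asynchronous
    rate.sleep()
```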
LECTURE 5
Consequences of embodiment -> obey the laws of physics -> deal with the real world -> time and speed -> must use sensors and actuators -> needs energy to sense, think and move
Sensors -> physical devices to perceive the robot's environment -> what they sense depends on the robot's task & its ecological/environmental niche
State
— description of the robot and all known others in its environment at a given point in time (position, orientation, velocity)
— external state (environment → sensors)
— internal state (inside the robot → proprioceptive sensors)
— the state space consists of all possible states a system (robot and environment) can be in
— types of states: observable, hidden, partially observable (from the perspective of the robot), discrete/continuous
— sensor space -> consists of all possible sensor readings a robot can have
Passive actuation — does not use any kind of motors or actuators — potential energy is the only source of power
Active actuation — external energy transformed into motion
Effectors & actuators -> components of a robot that enable it to take actions in order to achieve its goals
Effectors — devices that have an effect on the robot's environment (e.g., legs, wheels, arms, grippers, fingers)
Actuators — mechanisms that enable the effector to execute an action or movement (e.g., motors, muscles and tendons)
— types: - membranes - electric motors (DC motors, servo motors) - hydraulics - pneumatics - reactive materials (light-, chemically or thermally reactive)
DC motors -> easy to get, easy to use -> convert electrical energy into mechanical energy -> more current → more torque (rotational force) -> proportional to the power of the motor -> proportional to the rotation of the shaft
Servos -> turn the shaft to a specific position, e.g., to control arms or steering -> servos are made from DC motors > gear reduction, position sensors, controllers
Combining gears:
- output gear larger than input gear -> speed decreases, torque increases
- output gear smaller than input gear -> speed increases, torque decreases
Why use controllers - they are the robot's brain: they combine sensory input with actuator output
- process sensor input
- decide what to do
- control the actuators and effectors
Degrees of freedom (DOF): the minimum number of coordinates required to completely specify the motion of a mechanical system
- the number of DOF a robot has impacts its ability to interact with its environment
- translational -> X, Y, Z; rotational -> roll, pitch, yaw
Controllable & uncontrollable DOF:
- holonomic (CDOF (controllable) = TDOF (total)) -> helicopters, drones
- non-holonomic (CDOF < TDOF) -> cars, boats
- redundant (CDOF > TDOF) -> human arm, robotic arm
Locomotion
— the way a body moves from one place to another
— the body has to make locomotion possible through actuators and effectors:
- legs → walking, crawling, climbing, jumping, hopping, etc.: —> requirements: > large number of DOF/CDOF > at least 2 DOF per leg (lift & swing) > good contact > adapted to environment > stability -> the body is stable when the center of gravity (CoG) lies within the polygon of support -> static stability - stands still without falling -> dynamic stability - the body must actively balance or move to remain stable
- wheels → rolling
- arms → swinging, crawling, climbing
- wings → flying
- flippers → swimming
Gait -> the way in which a robot moves, characterised by the sequence of lifting/lowering legs and placing feet on the ground
-> the number of possible gait events depends on the number of legs: for a robot with K legs, the number of possible events is N = (2K-1)! (e.g., for K = 6 legs, N = 11! = 39,916,800)
-> properties
- stability → the robot does not fall
- speed → the robot can move quickly
- energy efficiency → the gait does not use too much energy
- robustness → the gait can recover from some types of failures
- simplicity → the controller for generating the gait is not unwieldy
-> stable gaits
- statically stable -> slow and energy-inefficient, but more stable and controllable
- dynamically stable -> faster and energy-efficient, but less stable and controllable
Fewer legs = more complex locomotion, less complex control, less complex gaits
Wheels:
— simple mechanical elements
— efficient
— easy to control
— high manoeuvrability
— we have to consider wheel type & arrangement
Stability:
— minimum number of wheels: two
— three wheels are sufficient for stability, provided all wheels are in contact with the ground
— manoeuvrability & controllability are important to consider
Manoeuvrability:
- Ackermann steering - designed for steering a car or other vehicles with 4 or more wheels
- differential drive - a two-wheeled drive system with independent actuators for each wheel - the ability to drive the wheels independently results in the ability to manoeuvre complicated paths (a sketch follows below)
- omnidirectional robots - Swedish wheels - synchronously turning wheels
Controllability vs manoeuvrability - high manoeuvrability requires complex controllability
Manoeuvrability vs odometry - high manoeuvrability results in worse odometry
Legs: advantages - adaptability, capable of crossing holes, manoeuvrability in rough terrain (only a set of contact points is needed), can be used to manipulate; disadvantages - power, mechanical complexity, high number of CDOFs
Wheels: advantages - simple, stable, fast, energy-efficient; disadvantages - need a relatively hard and flat surface
Trajectory/motion planning: the process of searching for a satisfactory path
Locomotion can be used for following a particular path/trajectory and getting to a particular location.
Getting somewhere given a certain path is harder than getting there using any possible path!
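A minimal sketch of differential-drive forward kinematics (the wheel-base value and the Euler pose-integration step are standard textbook relations, not from the notes): driving the two wheels at different speeds yields both translation and rotation.

```python
import math

def diff_drive_step(x, y, theta, v_left, v_right, wheel_base=0.3, dt=0.1):
    # v_left/v_right: independent wheel speeds (m/s); wheel_base assumed 0.3 m
    v = (v_right + v_left) / 2.0              # forward speed of the body
    omega = (v_right - v_left) / wheel_base   # turning rate from the speed difference
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

pose = (0.0, 0.0, 0.0)
for _ in range(50):                           # unequal speeds -> the robot arcs left
    pose = diff_drive_step(*pose, v_left=0.4, v_right=0.5)
print(pose)
```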
Why follow a specific path - safety, a time limit, task requirements
Finding the optimal trajectory depends on
- the task (shortest, safest, fewest bumps)
- the environment
- physical robot constraints -> the robot's body -> the steering mechanism (holonomic properties)

LECTURE 6
Control architecture -> provides guiding principles and constraints for organizing a robot's control system (its brain) -> makes it possible for a robot to produce desired behaviours
Computer architecture -> a set of principles for designing computers out of a collection of well-understood building blocks
Centralised control is fragile: if one component breaks, the whole robot stops working.
Hardware is good for fast and specialized uses; software is good for flexible, more general programs.
Algorithm -> a process for solving a problem using a finite (not endless) step-by-step procedure
To be Turing-universal, a language has to be capable of: sequencing, conditional branching, iteration
Control architectures differ fundamentally in the ways they treat time, modularity and representation:
— time (time scale): how quickly the robot has to respond to the environment compared with how quickly it can sense and think
- deliberative control looks into the future, so it works on a long time-scale
- reactive control responds to the immediate demands of the environment, so it works on a short time-scale
- hybrid control combines the long time-scale of deliberative control and the short time-scale of reactive control
- behavior-based control works to bring the time-scales together
— modularity: the way the control system (the robot's program) is broken into pieces or components, called modules, and how those modules interact with each other to produce the robot's overall behavior
- a deliberative control system consists of multiple modules, including sensing, planning, and acting; the modules do their work in sequence, with the output of one providing the input for the next (one at a time)
- reactive control - things happen at the same time, not one at a time; multiple modules are all active in parallel and can send messages to each other in various ways
- in hybrid control, there are three main modules: the deliberative layer, the reactive layer, and one in between, working in parallel, at the same time, but also talking to each other
- in behavior-based control, there are usually more than three main modules that also work in parallel and talk to each other
Deliberative control: used when a system has to "think" a lot
planning -> the process of looking ahead at the outcomes of possible actions, and searching for the sequence of actions that will reach the desired goal (a sketch follows below)
search -> an inherent part of planning which involves looking through the available representation "in search of" the goal state -> happens in the "brain" of the robot, not in the physical world, so things are possible in the search even if they are not in the real world
optimisation -> the process of improving a solution by finding a better one -> steps: search all paths, select the best one, prune the rest
A large state space makes planning long and difficult.
Steps of the architecture (SPA model): sensing, planning, acting (executing the plan)
problems:
- slow, as planning takes time
- representing the world takes a lot of space; generating plans for an environment can be very memory-intensive
- generating a plan for a real environment requires updating the world model, which takes time
useful only if:
- the environment does not change during the execution of the plan in a way that affects the plan
- the robot knows what state of the world and of the plan it is in at all times
- the robot's effectors are accurate enough to execute each step of the plan in order to make the next step possible
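A minimal sketch of planning as search: breadth-first search over a small grid world (the grid, start, and goal are illustrative assumptions). The search happens entirely in the robot's internal representation, not in the physical world.

```python
from collections import deque

grid = ["....#",
        ".##.#",
        "....#",
        ".#..."]           # '.' free, '#' obstacle: an assumed toy world model
start, goal = (0, 0), (3, 4)

def plan(start, goal):
    # breadth-first search: look ahead through the representation for the goal state
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:                          # reconstruct the action sequence
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == '.' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cur
                frontier.append((nr, nc))
    return None                                  # no plan exists

print(plan(start, goal))   # shortest sequence of grid states from start to goal
```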
Deliberative architectures are also called SPA architectures, for sense-plan-act.
They decompose control into functional modules which perform different and independent functions (e.g., sense-world, generate-plan, translate-plan-into-actions).
They execute the functional modules sequentially, using the outputs of one as the inputs of the next.
They use centralized representation and reasoning.
They may require extensive, and therefore slow, reasoning computation.
They encourage open-loop execution of the generated plans.
Grew out of early, classical AI (chess and other tasks where strategy is needed).
Reactive control:
Purely reactive systems do not use any internal representations of the environment and do not look ahead at the possible outcomes of their actions; they operate on a short time-scale and react to the current sensory information.
They use a direct mapping between sensors and effectors, and minimal, if any, state information.
They consist of collections of rules that couple specific situations to specific actions, similar to our reflexes.
Complex computation is removed entirely in favor of fast, stored, precomputed responses.
They consist of a set of situations (stimuli/conditions) and a set of actions (responses/actions/behaviors); the conditions must be mutually exclusive.
The designer has to think of everything, since a robot under this type of control doesn't really "think".
Action selection -> the process of deciding among multiple possible actions or behaviors
-> command arbitration — selecting one action or behavior from multiple candidates
-> command fusion — combining multiple candidate actions/behaviors into a single output action/behavior
Multitasking is needed.
Subsumption architecture (a sketch follows below)
-> build systems incrementally, from the simple parts to the more complex, all the while reusing the already existing components as much as possible in the new parts being added
-> higher layers can temporarily disable one or more of those below them
-> the outputs of a layer can be inhibited (it receives sensory inputs and performs its computation, but cannot control any effectors or other modules) or the layer can be suppressed (it receives no sensory inputs, and so computes no reactions and sends no outputs to effectors or other modules)
-> we avoid getting bogged down in the complexity of the overall task of the robot
-> if any higher-level layers/modules of a subsumption robot fail, the lower-level ones still continue to function unaffected
-> strongly coupled connections within layers, loosely coupled connections between layers
-> bottom-up — progresses from the simpler to the more complex, as layers are added incrementally
Systems are built from the bottom up.
Components are task-achieving actions/behaviors (not functional modules).
Components can be executed in parallel (multitasking).
Components are organized in layers; the lowest layers handle the most basic tasks.
Newly added components and layers exploit the existing ones.
Each component provides, and does not disrupt, a tight coupling between sensing and action.
There is no use of internal models; "the world is its own best model."
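A minimal sketch of layered, subsumption-style arbitration (the two behaviors, sensor fields, and priority scheme are illustrative assumptions): a higher layer takes over only when its stimulus is present; otherwise the layer below keeps control.

```python
# Two layers: the higher "avoid" layer overrides the lower "wander" layer
# whenever an obstacle is close; otherwise the lower layer keeps control.

def avoid(sensors):
    # higher layer: only produces an output when its stimulus is present
    if sensors["front_distance"] < 0.3:
        return ("turn_left",)
    return None                      # no output -> does not subsume

def wander(sensors):
    # lowest layer: always has a basic action available
    return ("go_forward",)

LAYERS = [avoid, wander]             # ordered from highest to lowest priority

def control_step(sensors):
    for layer in LAYERS:             # first layer with an output wins
        action = layer(sensors)
        if action is not None:
            return action

print(control_step({"front_distance": 1.0}))   # ('go_forward',)
print(control_step({"front_distance": 0.1}))   # ('turn_left',)
```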
Subsumption uses tight couplings between perception (sensing) and action to produce timely robotic responses in dynamic and unstructured worlds (think of it as "stimulus-response").
The subsumption architecture is the best-known reactive architecture, but certainly not the only one.
It uses a task-oriented decomposition of the controller: the control system consists of parallel (concurrently executed) modules that achieve specific tasks (avoid-obstacle, follow-wall, etc.).
Reactive control is also used as part of other types of control.
Limitations: minimal (if any) state; no memory, learning, or internal models/representations of the world.
Hybrid control:
combines reactive and deliberative control
consists of 3 modules -> a reactive layer, a planning layer, and a layer linking the two together
the linking layer has to:
- compensate for the limitations of both the planner and the reactive system
- reconcile their different time-scales
- deal with their different representations
- reconcile any contradictory commands they may send to the robot
main goal of the linking layer -> achieving the right compromise between the deliberative and reactive parts of the system
When the reactive system discovers that it cannot do its job, it can inform the deliberative layer about this new development, which can use this information to update its representation of the world and generate more accurate plans.
dynamic replanning -> whenever the reactive layer discovers that it cannot proceed, it signals the deliberative layer to do some thinking, in order to generate a new plan
off-line planning -> takes place while the robot is being developed and does not have much to worry about
on-line planning -> what a busy robot has to do while it is trying to get its job done and its goals achieved
universal plan -> the set of all possible plans for all initial states and all goals within the state space of a particular system
-> problems:
- the state space is too big for most realistic problems, so storing a universal plan is almost impossible
- if the world changes, a new plan has to be generated
- goals must not change; if they do, the rules have to change too
domain knowledge -> information about the robot, the task, and the environment put into the system
Hybrid control allows the robot to both plan and react: it involves real-time reactive control in one part of the system (usually the low level) and more time-expensive deliberative control in another part (usually the high level), with an intermediate level in between. It is capable of storing representations, planning, and learning.
Behaviour-based control:
involves the use of "behaviors" as modules for control
behaviour - something that achieves and maintains a goal; more complex than an action; takes inputs and sends outputs to effectors; is not instantaneous but time-extended; more expressive than simple reactive rules
behaviours can have different levels of abstraction (levels of detail or description)
typically executed in parallel, as in reactive systems, to enable the controller to respond immediately when needed
networks of behaviors are used to store state and to construct world models/representations
when assembled into distributed representations, behaviors can be used to store history and to look ahead into the future
behaviors are designed so that they operate on compatible time-scales
key properties -> ability to react in real time -> ability to use representations to generate efficient (not only reactive) behavior -> ability to use a uniform structure and representation throughout the system (with no intermediate layer(s))
Behavior-based controllers are networks of internal behaviors which interact (send messages to each other) in order to produce the desired external, observable, manifested robot behavior.
interaction dynamics -> patterns and history of interaction and change
representation is distributed over the whole behaviour structure: we distribute parts of a map into different behaviours and then connect the parts of the map
kidnapped robot problem -> somebody moved the robot, so it got confused
Behavior-based systems:
- use behaviors as the underlying modularity and representation
- enable fast real-time responses as well as the use of representation and learning
- use a subsumption (bottom-up) architecture
- use distributed representation and computation over concurrent behaviors
- are an alternative to hybrid systems, with equal expressive power
- require expertise to be programmed correctly, just like any other control approach

LECTURE 7
symbol grounding problem -> the problem of how to ground the meanings of symbol tokens in anything other than other symbols
physical symbol grounding -> grounding of symbols to real-world objects by a physical agent interacting in the real world -> form, referent, meaning [a figure illustrating this triad is garbled in the transcript]
social symbol grounding -> collective negotiation for the selection of shared symbols (words) and their grounded meanings in (potentially large) populations of agents
referential indeterminacy problem -> an unknown word can, theoretically, refer to an infinite number of objects
large language models:
- training machines on huge data sets
- often outperform humans
- the future with them is either promising or disturbing for us
- but they lack: physical experience in the real world, incremental social learning, true understanding of the world; only humans can attach meaning to words
language models — computer programmes which can interpret and produce language
computational modeling approach to the evolution of language: what kind of mechanisms are needed to create a language from a certain set of symbols; studying how a group of agents can develop a shared repertoire of symbols from scratch (no symbols or language at the beginning)
language is a global structure & a dynamical, adaptive system; part of chaos theory
for humans, language is out there and we learn it from others; we learn it socially
language games:
- cultural exchange of linguistic utterances
- individual learning of meaning representations
- social learning of real-world mapping
feature extraction: colour representation, shape feature, location in visual field, size; result -> a feature vector
discrimination games: to form meaningful representations
protocol for meaning creation & formation -> categorisation of f-vectors with nearest prototypes -> discrimination - distinguishing targets from other observed objects -> adaptation - move successfully discriminating prototypes towards the observed f-vector -> for an unsuccessful discrimination, a new category is made where the f-vector is a new prototype
lexicon (a sketch follows below)
-> an association matrix connecting words with meanings, where the strength of each word-meaning association has a certain weight
-> encoding a meaning - looking for the strongest association for a given meaning
-> decoding - finding the meaning with the strongest association for a given word
-> adaptation - adding associations to the lexicon, reinforcing the weights of successful associations and inhibiting unsuccessful ones
communicative success -> the number of successful games in the past 50 language games
phases of a language game -> invention (make words for meanings), alignment (creating conventions), stable self-organisation -> local 1-1 interactions with a positive feedback loop, emergent structure and complex dynamics
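A minimal sketch of such a lexicon as an association matrix, with the encode/decode/adaptation operations described above (the words, meanings, weights, and update step are illustrative assumptions):

```python
# Lexicon: (word, meaning) pairs with association weights.
lexicon = {
    ("wa", "red-ball"): 0.6, ("wa", "blue-box"): 0.1,
    ("bo", "red-ball"): 0.2, ("bo", "blue-box"): 0.7,
}

def encode(meaning):
    # strongest association for a given meaning -> word to utter
    return max((w for w, m in lexicon if m == meaning),
               key=lambda w: lexicon[(w, meaning)])

def decode(word):
    # strongest association for a given word -> meaning understood
    return max((m for w, m in lexicon if w == word),
               key=lambda m: lexicon[(word, m)])

def adapt(word, meaning, success, step=0.1):
    # reinforce a successful association, inhibit an unsuccessful one
    lexicon[(word, meaning)] = lexicon.get((word, meaning), 0.0) + (step if success else -step)

print(encode("red-ball"), decode("bo"))   # wa blue-box
adapt("wa", "red-ball", success=True)     # a successful game strengthens the link
```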
LECTURE 8
sensors - physical devices measuring physical quantities
sensor fusion - combining readings from several sensors
types of sensors (together they constitute the perceptual system):
- proprioceptive -> perceive elements of the robot's internal state (the state of its own body)
- exteroceptive -> perceive elements of the state of the external world around the robot
robot sources of uncertainty:
- sensor noise and errors
- sensor limitations
- effector and actuator noise and errors
- hidden and partially observable state
- lack of prior knowledge about the environment, or a dynamic and changing environment
signal-to-symbol problem -> sensors produce signals, while an action is usually based on a decision involving abstract symbols
sensor preprocessing -> processing sensor signals to extract the information a robot needs
perception
-> requires: sensors, computation and connectors
-> types:
- action-oriented perception / active sensing: instead of trying to reconstruct the world in order to decide what to do, robots use knowledge about the task to look for particular stimuli in the environment and respond accordingly
- expectation-based perception: use knowledge about the robot's environment to help guide and constrain how sensor data can be interpreted
- task-driven attention: direct perception to where more information is needed or likely to be provided; instead of having the robot sense passively as it moves, move the robot or its sensors to sense in the direction where information is most needed or available
- perceptual classes: divide up the world into perceptual categories that are useful for getting the job done
levels of sensor processing: computation, electronics, signal processing
calibration - the process of adjusting a mechanism so as to maximize its performance
ways of measuring speed:
- encode and measure the speed of a driven wheel
- encode and measure the speed of a passive wheel (caster) that is dragged by the robot
— sensors can be classified into active and passive, simple and complex
— switches may be the simplest sensors, but they provide plenty of variety and have a plethora of uses, including detecting contact, limits, and the turning of a shaft
— light sensors come in various forms, frequencies, and uses, like photocells, reflective sensors, polarized-light and IR sensors
— modulation of light makes it easier to deal with ambient light and to design special-purpose sensors
— there are various ways to set up a break beam sensor, but they are most commonly used inside motor shaft encoders
— resistive position sensors (potentiometers) can detect bending and are used in a variety of analog tuning devices
passive vs active: active sensors emit a signal, as they also have an emitter (both types have a detector)
passive sensors measure physical properties of the environment, without direct interaction
active sensors:
— provide their own signal/stimulus (and thus typically require extra energy) and use the interaction of that signal with the environment as the property they measure
— consist of an emitter and a detector: the emitter emits the signal, and the detector detects it
simple vs complex: complex sensors are multidimensional
simple sensors:
— measure physical properties of the environment
— have a detector
switches: open - no current flows, closed - current flows; their output results from physical contact of an object with the switch
used in:
- contact sensors - detect when the sensor has contacted another object (e.g., triggered when a robot hits a wall)
- limit sensors - detect when a mechanism has moved to the end of its range (e.g., they trigger when a gripper is wide open)
- shaft encoder sensors - detect how many times a motor shaft turns by having a switch click every time the shaft turns
- quadrature shaft encoding -> the mechanism for detecting and measuring the direction of rotation -> 2 encoders aligned so that their inputs coming from the detectors are 90 degrees out of phase -> comparing the outputs of the encoders at each time step with the outputs at the previous one tells whether there is a direction change -> only one encoder can change its state at a time; which one does so determines the direction in which the shaft is rotating -> used in robot arms with complex joints, such as ball-and-socket joints
- cartesian robots -> similar in principle to Cartesian plotter printers -> usually employed for high-precision assembly tasks -> an arm moves back and forth along an axis or gear
- bump sensors
- light sensors
- potentiometers -> resistive sensors -> turning the knob or pushing a slider effectively alters the resistance of the sensor -> a tab slides along a slot with fixed ends; as it is moved, the resistance between it and each end of the slot is altered, but the resistance between the two ends remains fixed -> used to tune the sensitivity of sliding and rotating mechanisms & to adjust the properties of other sensors
reflective optosensors -> the emitter and the detector are side by side, separated by a barrier; the presence of an object is detected when the light reflects from it and back into the detector
break beam sensors -> the emitter and the detector face one another; the presence of an object is detected if the beam of light between the emitter and the detector is interrupted or broken
accelerometer -> measures acceleration using Newton's second law of motion, F = ma -> a calibrated weight connected to a spring moves inside the device -> when the device accelerates, the position of the mass lags and is tracked -> used for: velocity measurements (integration), orientation or position estimation (start and end of motion)
transducer - a device transforming one form of energy into another
complex sensors: provide much more information & require much more processing
ultrasonic (sonar) sensors: measure the time it takes sound to travel (a sketch follows below)
- the emitter produces a chirp or ping of ultrasound frequency; the sound travels away from the source and, if it encounters a barrier, bounces off it and perhaps returns to the receiver; if there is no barrier, the sound does not return; the sound wave weakens (attenuates) with distance and eventually breaks down
- if the sound comes back, the time taken to return is used to calculate the distance between the emitter and the barrier
- a timer is started when the chirp is emitted and stopped when the reflected sound returns; the resulting time is then multiplied by the speed of sound and divided by 2, as we only want the one-way distance
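That last calculation as a short sketch (the speed of sound in air at room temperature, ~343 m/s, is a standard value; the echo time is an assumed reading):

```python
SPEED_OF_SOUND = 343.0          # m/s in air at ~20 °C (standard value)

def sonar_distance(echo_time_s):
    # the time of flight covers emitter -> barrier -> detector, so halve it
    return echo_time_s * SPEED_OF_SOUND / 2.0

print(sonar_distance(0.01))     # 0.01 s round trip -> ~1.715 m to the barrier
```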
- relatively high power
- problem - specular reflection -> reflection from the outer surface of the object -> the sound wave traveling from the emitter bounces off multiple surfaces in the environment before returning to the detector
- solutions -> phased arrays of sensors, making surfaces less smooth, using action-oriented perception
lasers:
- emit highly amplified and coherent radiation at one or more frequencies
- light is so fast that we use phase-shift measurements, rather than time-of-flight, to compute the distance
- involve higher-power electronics -> larger and more expensive
- much (much, much) more accurate
- high resolution (resolution: the process of separating or breaking something into its constituent parts)
visual sensing:
- requires by far the most processing and provides by far the most useful information
- cameras
- edge detection: an edge is a curve in the image plane across which there is a significant change in brightness (a sketch follows below)
- segmentation - the process of dividing or organizing the image into parts that correspond to continuous objects
- model-based vision: recognizing things based on models, which can vary in how complex they are
- motion vision: the robot observes static objects while it is moving
- stereo vision: the ability to use the combined points of view of 2 eyes/cameras to reconstruct 3D solid objects and to perceive depth
- use of textures, shading, contours, etc.
suiting vision to robots:
- look for specifically and uniquely colored objects, and recognize them that way
- use the combination of color and movement (color blob tracking)
- use a small image plane
- combine vision with other small & fast sensors
- use knowledge about the environment
machine vision — questions related to recognition, such as "Who is that?" and "What is that?"
robot vision — questions related to action, such as "Where do I go?" and "Can I grab that?"
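A minimal sketch of edge detection as a brightness-change filter (the tiny image, the horizontal-difference kernel, and the threshold are illustrative assumptions; real systems typically use e.g. Sobel kernels in both directions):

```python
# Mark pixels where brightness changes sharply between horizontal neighbours.
image = [
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
]
THRESHOLD = 50    # assumed: how big a brightness jump counts as an edge

edges = [[1 if abs(row[x + 1] - row[x]) > THRESHOLD else 0
          for x in range(len(row) - 1)]
         for row in image]

for row in edges:
    print(row)    # the vertical edge shows up between columns 2 and 3
```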