Artificial Intelligence (AI) is a branch of computer science that aims to create systems capable of performing tasks that typically require human intelligence. These tasks include understanding natural language, recognizing patterns, learning from experience, reasoning, and making decisions. AI systems are designed to mimic cognitive functions associated with human minds, such as learning from data, adapting to new situations, and solving problems. They rely on algorithms and computational models to process large amounts of data, identify patterns, and make predictions or decisions based on that information. There are various approaches to AI, including symbolic AI, which involves programming computers to manipulate symbols based on predefined rules, and machine learning, which enables systems to learn from data without being explicitly programmed. Deep learning, a subset of machine learning, utilizes artificial neural networks with many layers (hence "deep") to analyse complex data and extract meaningful insights.

Ethical Problems
Bias and Discrimination: AI systems can perpetuate or even exacerbate biases present in their training data, leading to unfair treatment of certain groups.
Privacy Violations: AI technologies can intrude on individuals’ privacy by collecting, storing, and analyzing vast amounts of personal data without consent.
Lack of Accountability: It can be unclear who is responsible for the actions of AI systems, especially when they cause harm or operate in unexpected ways.
Dehumanization: Over-reliance on AI can lead to dehumanization in various sectors, such as customer service and caregiving, by replacing meaningful human contact.
Approach to Limit Impact: Implementing rigorous ethical guidelines, promoting transparency in AI algorithms and data usage, and ensuring that AI systems are regularly audited for bias and compliance with privacy laws.

Legal Problems
Intellectual Property Issues: Determining the ownership of AI-generated content or inventions can be complex.
Liability for Harm: Legal challenges arise in assigning liability when AI systems cause damage or harm, whether physically, financially, or emotionally.
Compliance with International Laws: AI systems operating across borders must navigate varying international regulations, which can be problematic.
Approach to Limit Impact: Crafting clear legal frameworks that define the rights and responsibilities associated with AI outputs, and establishing international agreements to manage cross-border AI interactions.

Social Problems
Job Displacement: AI can lead to the displacement of workers, especially in sectors like manufacturing and administrative jobs, potentially increasing unemployment rates.
Erosion of Human Skills: Overdependence on AI can lead to a decline in certain human skills, particularly those related to decision-making and problem-solving.
Social Manipulation: AI can be used to manipulate social and political scenarios, such as through deepfakes or algorithmically curated content that can influence public opinion and elections.
Approach to Limit Impact: Developing policies that support workforce transition through retraining programs, maintaining a balance between human and AI roles to preserve essential skills, and regulating the use of AI in sensitive areas such as media and political campaigns.

Agents and Environments: In the context of artificial intelligence, an agent is anything that can perceive its environment through sensors and act upon that environment through actuators.
An environment is everything outside the agent that can be sensed and affected by the agent's actions. Agents and environments interact continuously, with the agent receiving input from the environment through sensors and producing output through actuators.

Rationality: Rationality in AI refers to the ability of an agent to select actions that maximize its expected performance measure, given its knowledge and beliefs about the world. A rational agent is one that makes decisions that are expected to lead to the best outcome, based on its understanding of the environment and its goals (a short code sketch of this idea appears at the end of this section).

PEAS: PEAS stands for Performance measure, Environment, Actuators, and Sensors. It is a framework used to define the design specifications of an intelligent agent (see the example after this section).
Performance measure: This specifies what the agent is trying to achieve or optimize. It defines the criteria for success.
Environment: This describes the context in which the agent operates, including the objects, entities, and conditions that the agent can sense and affect.
Actuators: These are the mechanisms through which the agent can take actions or manipulate the environment.
Sensors: These are the mechanisms through which the agent can perceive or gather information about the environment.

Environment Types: Environments in AI can be categorized into different types based on their characteristics:
Fully observable vs. partially observable: In a fully observable environment, the agent's sensors provide complete information about the state of the environment. In a partially observable environment, some aspects of the environment may be hidden from the agent's sensors.
Deterministic vs. stochastic: In a deterministic environment, the next state of the environment is completely determined by the current state and the actions taken by the agent. In a stochastic environment, there is some randomness or uncertainty in the transition between states.
Episodic vs. sequential: In an episodic environment, the agent's actions only affect the immediate reward or outcome, and each episode is independent of previous episodes. In a sequential environment, the agent's actions have long-term consequences, and the current state depends on previous states and actions.
Static vs. dynamic: In a static environment, the environment does not change while the agent is deliberating. In a dynamic environment, the environment can change while the agent is acting.

Agent Types: Agents can also be classified into different types based on their design and behavior:
Simple reflex agents: These agents select actions based solely on the current percept (input) and predefined rules or mappings from percepts to actions (a small example follows at the end of this section).
Model-based reflex agents: These agents maintain an internal model of how the world works and use it, together with the current percept, to keep track of aspects of the environment they cannot currently observe when choosing actions.
Goal-based agents: These agents have explicit goals or objectives and take actions that are expected to achieve those goals.
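To make the idea of rationality concrete, here is a minimal sketch of expected-performance maximization in Python. It assumes the agent has a hypothetical outcome model giving, for each action, the probabilities of the performance scores that action might produce; the action names and numbers are illustrative and not taken from the notes above.

# Hypothetical outcome model: action -> list of (probability, performance score).
outcome_model = {
    "move_left":  [(0.8, 10), (0.2, -5)],
    "move_right": [(0.5, 20), (0.5, -10)],
    "stay":       [(1.0, 2)],
}

def expected_performance(outcomes):
    """Expected value of the performance measure for one action."""
    return sum(p * score for p, score in outcomes)

def rational_action(model):
    """Pick the action with the highest expected performance measure."""
    return max(model, key=lambda action: expected_performance(model[action]))

print(rational_action(outcome_model))  # move_left: 0.8*10 + 0.2*(-5) = 7, the best of the three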
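The PEAS framework can also be written down as a simple data structure. The sketch below is a hypothetical PEAS description for a robot vacuum cleaner; the field contents are illustrative assumptions, not specifications from the notes.

from dataclasses import dataclass

@dataclass
class PEAS:
    """The four elements of a PEAS task description."""
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

# Hypothetical PEAS description for a robot vacuum cleaner (illustrative only).
vacuum_peas = PEAS(
    performance_measure=["area cleaned", "battery used", "time taken"],
    environment=["rooms", "furniture", "dirt", "charging dock"],
    actuators=["wheels", "brushes", "suction motor"],
    sensors=["bump sensor", "dirt sensor", "cliff sensor", "battery gauge"],
)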
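Finally, here is a minimal sketch of the agent-environment interaction loop, using a simple reflex agent in a toy two-location "vacuum world". The environment, percepts, and condition-action rules are illustrative assumptions chosen to keep the example short.

class VacuumEnvironment:
    """Toy environment: two locations, A and B, each of which may be dirty."""
    def __init__(self):
        self.dirty = {"A": True, "B": True}
        self.agent_location = "A"

    def percept(self):
        """What the agent's sensors report: (location, is that location dirty?)."""
        return self.agent_location, self.dirty[self.agent_location]

    def execute(self, action):
        """Apply the agent's actuator command to the environment."""
        if action == "Suck":
            self.dirty[self.agent_location] = False
        elif action == "Left":
            self.agent_location = "A"
        elif action == "Right":
            self.agent_location = "B"

def simple_reflex_agent(percept):
    """Map the current percept directly to an action via fixed condition-action rules."""
    location, dirty = percept
    if dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

env = VacuumEnvironment()
for step in range(4):                      # the agent-environment interaction loop
    percept = env.percept()                # sense the environment
    action = simple_reflex_agent(percept)  # decide using only the current percept
    env.execute(action)                    # act on the environment
    print(step, percept, "->", action)

A model-based or goal-based agent would replace simple_reflex_agent with a function that also consults stored state or an explicit goal, but the surrounding sense-decide-act loop stays the same.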