
Week3-lecture.pdf


Transcript


Software Development Company Simulation
Prasara Jakkaew, Ph.D.

Software Development Lifecycle (SDLC)
- Understanding the stages of the SDLC: requirements gathering, design, development, testing, deployment, and maintenance.
- Familiarity with various SDLC models such as Agile, Waterfall, and Scrum.

Project Management
- Basics of project management, including planning, scheduling, risk management, and resource allocation.
- Use of tools like JIRA, Trello, or Microsoft Project for managing tasks and timelines.

Team Collaboration
- Effective communication within the team and with stakeholders.
- Roles and responsibilities in a software team: developers, testers, project managers, UX/UI designers, etc.
- Collaboration tools like Slack, GitHub, or Confluence.

Software Development Tools
- Familiarity with version control systems (e.g., Git), integrated development environments (IDEs), and continuous integration/continuous deployment (CI/CD) pipelines.
- Use of coding languages, frameworks, and libraries relevant to the project.

Company Types and Their Impact on Development
- Understanding different types of software companies: product-based companies, IT service providers, startups, outsourcing firms, and offshore development centers.
- How the type of company influences project goals, client interactions, and development processes.

Business and Market Awareness
- Identifying market needs and customer requirements.
- Creating a product that aligns with business goals and market demands.
- Understanding the importance of user-centered design and user experience (UX) principles.

Ethical Considerations in AI Development
- Addressing ethical issues related to AI, including data privacy, bias, and transparency.
- Developing responsible AI systems that adhere to ethical guidelines.

Creativity and Innovation
- Encouraging innovative thinking in product development.
- Exploring how AI can be leveraged to create personalized solutions, such as healthcare recommendations.

Evaluation and Feedback
- Methods for evaluating software quality, including testing strategies and peer review.
- Importance of feedback loops in improving both the product and the development process.

Presentation and Documentation
- Creating comprehensive project documentation, including project proposals, requirements documents, and user manuals.
- Presenting the final product to stakeholders, including a clear demonstration of features and benefits.

Ethical Considerations in AI Development
- Transparency and Explainability
- Privacy and Data Protection
- Bias and Fairness
- Accountability
- Autonomy and Consent
- Ethical Use of AI
- Human Oversight
- Security
- Long-Term Impacts
- Regulatory Compliance

What is AI Bias?
Artificial intelligence models and their predictions are heavily determined by the data used for training. If the data is not high-quality and scrubbed, it can include biases. Skewed datasets comprise incomplete, inaccurate, and inappropriate training data that produce poor decisions. The data fed into the AI engine can therefore influence the AI model and potentially introduce bias at any stage of the machine learning lifecycle. If the model is trained on those biases, it can learn to incorporate them into its algorithm and apply them when making predictions or decisions. A fair AI model therefore requires high-quality training sets of accurate, complete, consistent, valid, and uniform data.

AI bias examples
In 2009, Nikon launched Coolpix digital compact cameras equipped with a new blink detection feature. While snapping a picture of a person, the blink detection feature highlights the face on the screen with a yellow box. After the photo was captured, a warning "Did someone blink?" was displayed if someone's eyes weren't fully open, so another image could be taken if required. However, this feature was inaccurate in detecting blinks for Asian consumers, leading many to state that the feature was biased.

In 2015, Google's Photos application mistakenly labeled a photo of a Black couple as gorillas. Google's AI categorization and labeling feature was another example of how AI internalizes biases. Users were, however, able to remove the incorrectly identified photo classification within the application, helping improve its accuracy over time.

What is AI Fairness?
Fairness in AI is a growing field that seeks to remove bias and discrimination from algorithms and decision-making models. Machine learning fairness addresses and eliminates algorithmic bias from machine learning models based on sensitive attributes like race and ethnicity, gender, sexual orientation, disability, and socioeconomic class.

How to Make Machine Learning Fairer?
- Ensure the use of diverse and high-quality training data in the model.
- Identify any vulnerabilities in public data sets. Vulnerabilities may result from poor-quality data sets, like misaligned and mislabeled datasets and inconsistent benchmarking.
- Use less sensitive information during the model training process to avoid privacy issues.
- Utilize tools that can help prevent and eliminate bias in machine learning, like IBM's AI Fairness 360, Google's What-If Tool and Model Cards Toolkit, Microsoft's Fairlearn, and Deon (a minimal sketch using Fairlearn follows this list).
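To make measuring fairness concrete, here is a minimal sketch using Microsoft's open-source Fairlearn library mentioned above. The tiny dataset, the group labels, and the hard-coded predictions are invented for illustration; the point is only to show how a group-fairness metric such as demographic parity difference can be computed.

```python
# A minimal sketch of auditing a model for group fairness with Fairlearn.
# The labels, predictions, and groups below are invented for illustration.
from fairlearn.metrics import demographic_parity_difference, MetricFrame
from sklearn.metrics import accuracy_score

# Ground-truth labels, model predictions, and a sensitive attribute
# (e.g., a demographic group) for eight hypothetical applicants.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Demographic parity difference: the gap in positive-prediction rates
# between groups. 0.0 means both groups receive positive predictions
# at the same rate; larger values indicate more disparity.
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=group)
print(f"Demographic parity difference: {dpd:.2f}")

# MetricFrame breaks any metric down per group, which helps spot
# which group a model underperforms on.
mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)
print(mf.by_group)
```

In practice the same audit would run on a held-out test set, and a large disparity would prompt the mitigation steps listed above.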
What is AI transparency?
AI transparency means understanding how artificial intelligence systems make decisions, why they produce specific results, and what data they're using. Simply put, AI transparency is like providing a window into the inner workings of AI, helping people understand and trust how these systems work. According to Zendesk's CX Trends Report, 65 percent of CX leaders see AI as a strategic necessity, making AI transparency a crucial element to consider.

AI transparency involves understanding its ethical, legal, and societal implications, and how transparency fosters trust with users and stakeholders. According to the same report, 75 percent of businesses believe that a lack of transparency could lead to increased customer churn in the future. Because AI as a service (AIaaS) providers make AI technology more accessible to businesses, ensuring AI transparency is more important than ever.

Ethical implications of AI
The ethical implications of AI mean making sure AI behaves fairly and responsibly. Biases in AI models can unintentionally discriminate against certain demographics. For example, using AI in the workplace can help with the hiring process, but it may inadvertently favor certain groups over others based on irrelevant factors like gender or race. Transparent AI helps reduce biases to sustain fair results in business use cases.

Legal implications of AI
The legal implications of AI involve ensuring that AI systems follow the rules and laws set by governments. For instance, if AI-powered software collects personal information without proper consent, it can violate privacy laws. Creating laws that emphasize transparency in AI can ensure compliance with legal requirements.

Societal implications of AI
The societal implications of AI entail understanding how AI affects the daily lives of individuals and society as a whole. For example, using AI in healthcare can help doctors make accurate diagnoses faster or suggest personalized treatments. However, it can raise questions about equitable access based on the technology's affordability.

AI transparency requirements
- Explainability
- Interpretability
- Accountability

Explainability
Explainable AI (XAI) refers to the ability of an AI system to provide easy-to-understand explanations for its decisions and actions. For example, if a customer asks a chatbot for product recommendations, an explainable AI system could provide details such as:
- "We think you'd like this product based on your purchase history and preferences."
- "We're recommending this product based on your positive reviews for similar items."

Offering clear explanations gives the customer an understanding of the AI's decision-making process. This builds customer trust because consumers understand what's behind the AI's responses. This concept can also be referred to as responsible AI, trustworthy AI, or glass box systems. On the flip side, there are black box systems. These AI models are complex and provide results without clearly explaining how they achieved them. This lack of transparency makes it difficult or impossible for users to understand the AI's decision-making processes, leading to a lack of trust in the information provided.
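As a toy illustration of the chatbot example above, the sketch below pairs each recommendation with the signal that triggered it, so the system can always answer "why this product?". The products, rules, and function names are invented; a production recommender would derive reasons from its actual model features.

```python
# A toy "glass box" recommender: every suggestion carries the reason
# that produced it, so the explanations shown above can be generated
# directly from the decision process. All data here is invented.
from dataclasses import dataclass

@dataclass
class Recommendation:
    product: str
    reason: str  # human-readable explanation surfaced to the customer

def recommend(purchase_history: list, positive_reviews: list) -> list:
    recs = []
    if "running shoes" in purchase_history:
        recs.append(Recommendation(
            "running socks",
            "We think you'd like this based on your purchase history."))
    if "trail backpack" in positive_reviews:
        recs.append(Recommendation(
            "hiking poles",
            "Recommended based on your positive reviews for similar items."))
    return recs

for rec in recommend(["running shoes"], ["trail backpack"]):
    print(f"{rec.product}: {rec.reason}")
```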
Interpretability
Interpretability in AI focuses on human understanding of how an AI model operates and behaves. While XAI focuses on providing clear explanations about the results, interpretability focuses on internal processes (like the relationships between inputs and outputs) to understand the system's predictions or decisions. Using the same scenario as above, where a customer asks a chatbot for product suggestions, an interpretable AI system could explain that it uses a decision tree model to decide on a recommendation.
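Decision trees are a common example of an interpretable model because their internal logic can be printed and read directly. The sketch below trains a tiny scikit-learn tree on invented purchase data and dumps its decision rules; the feature names and data are hypothetical.

```python
# A minimal sketch of model interpretability: a decision tree whose
# learned rules can be printed and read by a human. Data is invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features: [past purchases, positive reviews left].
# Label: 1 = customer accepted the recommendation, 0 = ignored it.
X = [[0, 0], [1, 0], [0, 2], [3, 1], [5, 4], [2, 3]]
y = [0, 0, 1, 1, 1, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the exact input/output relationships the model
# learned, which is what interpretability is about.
print(export_text(tree, feature_names=["past_purchases", "positive_reviews"]))
```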
Accountability
Accountability in AI means ensuring AI systems are held responsible for their actions and decisions. With machine learning (ML), AI should learn from its mistakes and improve over time, while businesses should take suitable corrective actions to prevent similar errors in the future.

Say an AI chatbot mistakenly recommends an item that's out of stock. The customer attempts to purchase the product because they believe it's available, but they are later informed that the item is temporarily out of stock, leading to frustration. The company apologizes and implements human oversight to review and validate critical product-related information before bots can communicate it to customers.

This example of accountability in AI for customer service shows how the company took responsibility for the error, outlined steps to correct it, and implemented preventative measures. Businesses should also perform regular audits of AI systems to identify and eliminate biases, ensure fair and nondiscriminatory outcomes, and foster transparency in AI.
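One common way to support both auditing and the human-oversight fix described above is to log every bot decision together with its inputs and hold high-impact messages for review. The sketch below is a hypothetical minimal version of such an audit trail; the file name, field names, and review function are invented.

```python
# A minimal sketch of an accountability mechanism: every bot decision is
# written to an append-only audit log, and stock-related claims must pass
# a human review gate before reaching the customer. Schema is hypothetical.
import json, time

AUDIT_LOG = "bot_decisions.jsonl"

def log_decision(message: str, inputs: dict, approved: bool) -> None:
    # Append-only log so past decisions can be audited later.
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps({
            "timestamp": time.time(),
            "inputs": inputs,
            "message": message,
            "human_approved": approved,
        }) + "\n")

def human_review(message: str) -> bool:
    # Placeholder for a real review queue.
    return input(f"Approve '{message}'? [y/n] ").strip().lower() == "y"

def send_to_customer(message: str, inputs: dict, needs_review: bool) -> None:
    # Critical claims (e.g., "in stock") are held for human validation.
    approved = (not needs_review) or human_review(message)
    log_decision(message, inputs, approved)
    if approved:
        print(f"BOT -> customer: {message}")

send_to_customer("This jacket is in stock!", {"sku": "JKT-42", "stock": 0},
                 needs_review=True)
```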
Levels of AI transparency
There are three levels of AI transparency, starting from within the AI system, then moving to the user, and finishing with a global impact. The levels are as follows:
- Algorithmic transparency
- Interaction transparency
- Social transparency

Algorithmic transparency
Algorithmic transparency focuses on explaining the logic, processes, and algorithms used by AI systems. It provides insights into the types of AI algorithms, like machine learning models, decision trees (flowchart-like models), neural networks (computational models), and more. It also details how systems process data, how they reach decisions, and any factors that influence those decisions. This level of transparency makes the internal workings of AI models more understandable to users and stakeholders.

Interaction transparency
Interaction transparency deals with the communication and interactions between users and AI systems. It involves making exchanges more transparent and understandable. Businesses can achieve this by creating interfaces that communicate how the AI system operates and what users can expect from their interactions.

Social transparency
Social transparency extends beyond the technical aspects and focuses on the broader impact of AI systems on society as a whole. This level of transparency addresses the ethical and societal implications of AI deployment, including potential biases, fairness, and privacy concerns.

Regulations and standards of transparency in AI
Because artificial intelligence is a newer technology, the regulations and standards of transparency in AI have been rapidly evolving to address ethical, legal, and societal concerns.
- General Data Protection Regulation (GDPR): established by the European Union (EU); includes provisions surrounding data protection, privacy, consent, and transparency.
- Organisation for Economic Co-operation and Development (OECD) AI Principles: a set of value-based principles that promote the trustworthy, transparent, explainable, accountable, and secure use of AI.
- U.S. Government Accountability Office (GAO) AI accountability framework: a framework that outlines responsibilities and liabilities in AI systems, ensuring accountability and transparency for AI-generated results.
- EU Artificial Intelligence Act: an act proposed by the European Commission that aims to regulate the development of AI systems in the EU.
These regulations can standardize the use and development of AI, locally and globally. By emphasizing transparency, ethical considerations, and accountability, AI systems can become consistently clearer and more trustworthy.

The benefits of AI transparency
- Builds trust with users, customers, and stakeholders
- Promotes accountability and responsible use of AI
- Detects and mitigates data biases and discrimination
- Improves AI performance
- Addresses ethical issues and concerns

Challenges of transparency in AI (and ways to address them)

Keeping data secure
How to handle this challenge: Appoint at least one person on the team whose primary responsibility is data protection. Brandon Tidd, the lead Zendesk architect at 729 Solutions, says that "CX leaders must critically think about their entry and exit points and actively workshop scenarios wherein a bad actor may attempt to compromise your systems."

Explaining complex AI models
How to handle this challenge: Develop visuals or simplified diagrams to illustrate how complex AI models function. Choose AI-powered software with a user-friendly interface that provides easy-to-follow explanations without the technical details.

Maintaining transparency with evolving AI models
How to handle this challenge: Establish a comprehensive documentation process that tracks the changes made to an AI ecosystem, like its algorithms and data. Provide regular and updated transparency reports that note these changes in the AI system so stakeholders are informed about the updates and any implications.
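As one hypothetical way to implement the documentation process just described, the sketch below records each change to a model (new training data, new algorithm version, etc.) as a structured changelog entry that can later be compiled into a transparency report. The file name, schema, and example values are all invented for illustration.

```python
# A minimal sketch of a model changelog for transparency reporting.
# Each entry records what changed in the AI system and why; the schema
# is hypothetical and would be adapted to a real governance process.
import json
from datetime import date

CHANGELOG = "model_changelog.jsonl"

def record_change(model: str, version: str, change: str, impact: str) -> None:
    entry = {
        "date": date.today().isoformat(),
        "model": model,
        "version": version,
        "change": change,              # e.g., new training data or algorithm
        "stakeholder_impact": impact,  # what users should know
    }
    with open(CHANGELOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_change(
    model="support-chatbot",
    version="2.4.0",
    change="Retrained on Q3 support tickets; switched to a larger intent model.",
    impact="Answers may differ from v2.3 on billing topics.",
)
```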
Data Privacy vs. Data Security vs. Data Protection
- Data Privacy: ensuring proper use of personal data by giving individuals control over how their data is accessed, used, or shared.
- Data Security: protecting data against unauthorized access, use, or destruction by implementing appropriate technical controls, mechanisms, and procedures.
- Data Protection: covers data availability, immutability, preservation, deletion/destruction, "data privacy," and "data security."

Data privacy and data security
Data privacy focuses on the rights of individuals, the purpose of data collection and processing, privacy preferences, and the way organizations govern personal data. It focuses on how to collect, process, share, archive, and delete the data under the law. Data security includes a set of standards, safeguards, and measures an organization takes to prevent any third party from unauthorized access to digital data, or any intentional or unintentional alteration, deletion, or disclosure of data.

Example No. 1: Data Privacy
Example.com sells unique products via its eCommerce shop, and it collects many pieces of data from its online shoppers, such as:
- Email addresses and log-in details
- Shipping addresses
- Billing addresses

To ensure proper handling of personal data and to give individuals control over access to and sharing of their data, Example.com does the following:
- It allows its customers to unsubscribe from its email marketing and newsletter list.
- It does not disclose its customers' email addresses and purchase data to data brokers without getting its customers' consent.
- It stores customers' purchase information in accordance with data storage periods determined by applicable laws.
These efforts are all part of Example.com's data privacy strategy.

Example No. 2: Data Security
The executives recently decided to update Example.com's data security policy. As a result, they hired a data security analyst, who brought to their attention that more staff members had access to shoppers' information than was necessary, weakening the company's overall data security. After reviewing which staff members needed access to this information, they reduced the number of "need to know" players from 26 to only seven. In addition, they created an outlet for some other members to request access under special circumstances. By reducing the number of staff members who could access shoppers' data by nearly three-quarters, Example.com significantly strengthened its data security plan.
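To illustrate the "need to know" principle from Example No. 2, here is a hypothetical sketch of a small access-control check: only staff on an explicit allowlist can read shopper records, and everyone else must file a request for special-circumstance access. The names, record format, and request queue are invented.

```python
# A minimal sketch of "need to know" access control for shopper data,
# illustrating Example No. 2. Staff names and structure are invented.
AUTHORIZED = {"alice", "bob", "chai", "dana", "eve", "farid", "grace"}  # the 7
PENDING_REQUESTS = []  # staff waiting on special-circumstance approval

def read_shopper_record(staff_member: str, shopper_id: str) -> str:
    if staff_member in AUTHORIZED:
        return f"[record for shopper {shopper_id}]"
    # Everyone else must request special-circumstance access.
    PENDING_REQUESTS.append(staff_member)
    raise PermissionError(
        f"{staff_member} lacks access; a request has been filed for review.")

print(read_shopper_record("alice", "S-1001"))   # allowed
try:
    read_shopper_record("harry", "S-1001")      # denied, queued for review
except PermissionError as err:
    print(err)
```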
Autonomy and Consent
- User autonomy: AI systems should respect the autonomy of users, particularly in healthcare, where patients have the right to make informed decisions about their care.
- Informed consent: users should be informed about how their data will be used and should provide consent for its use in AI systems.

Long-Term Effects of Early AI Exposure on Kids' Cognitive Development
As children become more and more familiar with digital tools, AI in early childhood education is becoming increasingly prevalent. Children have access to many more avenues of digital interaction than ever before. When talking about "early" exposure, we generally mean AI tool usage between the ages of three and eight.

The uses of AI tools by children

Educational Apps and Games
- Adaptive learning platforms: tools such as DreamBox or Khan Academy provide personalized learning options to students, adapting the difficulty of questions and topics based on the child's skill and learning pace (a minimal sketch of this adaptation loop appears after this section).
- Language learning: apps like Duolingo use AI to adjust language lessons to the child's performance and ability.

Interactive Storytelling
- AI-powered story creators: apps like Shorebird and AI Dungeon help children create their own stories, with generative AI assisting in plot ideation, character development, and dialogue.
- Voice-activated storytelling: devices such as Amazon Echo or Alexa can tell interactive stories, allowing children to choose their own paths and endings.

Virtual Tutors
- Homework assistance: AI-powered tools like Socratic and Brainly can explain difficult concepts to children, giving them extra tutoring on difficult homework assignments.

Creative Arts
- Art and music creation: platforms like DoodleLens and AI Duet help children create art and music by suggesting artistic styles or musical accompaniments based on their commands.

Social and Emotional Learning
- AI companions: apps like Replika and Woebot stimulate emotional and social development by engaging in conversation and providing emotional support to children.
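The adaptive-learning idea above can be illustrated with a very small difficulty-adjustment loop: raise the difficulty after a correct answer and lower it after a mistake. This staircase rule is a deliberately simplified stand-in; real platforms like DreamBox use far richer, proprietary learner models.

```python
# A deliberately simplified sketch of adaptive difficulty: a staircase
# rule that moves question difficulty up after a correct answer and
# down after a mistake. Bounds and step size are arbitrary choices.
def next_difficulty(current: int, answered_correctly: bool,
                    lowest: int = 1, highest: int = 10) -> int:
    step = 1 if answered_correctly else -1
    return max(lowest, min(highest, current + step))

difficulty = 5
for correct in [True, True, False, True]:  # a child's hypothetical answers
    difficulty = next_difficulty(difficulty, correct)
    print(f"next question difficulty: {difficulty}")
```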
Cognitive Development Traits and Skills
How might heavy usage of these tools at a young age affect a child's cognitive abilities and their brains' ability to grow and progress naturally and healthily? Cognitive traits like social skills, critical thinking, language, and emotional and physical development are the most likely to be affected, given the nature in which children are using AI technology.

Social Skills
Although AI software provides plenty of advantages to children, overreliance on these tools could reduce real-world, face-to-face interactions, potentially limiting opportunities to practice social skills with other people. AI cannot fully replicate human emotional responses. Without these interactions, social skills like empathy and communication could suffer due to less engagement with peers. On the other hand, using intuitive interfaces and speech-based commands with AI can teach children how to interact with digital content and environments, enhancing their digital literacy. However, this should be balanced with traditional learning methods to maintain access to healthy social interactions. This thoughtful integration of AI tools into a child's education helps to develop both their digital fluencies and their interpersonal skills.

Critical Thinking and Problem-Solving
Using AI for personalized learning can profoundly affect children, particularly those with learning disabilities like ADHD and dyslexia. Platforms such as Knewton, DreamBox, and Smart Sparrow offer students the time and space they need to solve problems or comprehend literature without the stresses of a class that moves too quickly or too slowly. AI can support critical thinking by presenting complex problems and simulations, but it may limit opportunities for real-world problem-solving experiences. For example, an AI platform can simulate a historical event or a science experiment to teach a child a certain subject, but it lacks the hands-on engagement that would come from a live debate with classmates or a physical classroom experiment. This could potentially create a gap in a student's problem-solving skills, leaving them less prepared to navigate the nuanced challenges of real-world scenarios. The solution is to integrate these tools into existing structures to bridge this gap, building both theoretical and practical problem-solving abilities in children.

Verbal and Language Skills
Language-based AI models like Rosetta Stone and Lingokids can help children improve their language acquisition and communication. By providing personalized feedback, such as notes on correct vocabulary use and accent reduction, these tools can enhance vocabulary, pronunciation, and language comprehension, improving overall language skills and increasing the likelihood of learning a second language. However, when children rely too much on these technologies, they may have fewer opportunities for real-life conversation, leading to lower communication scores and potentially underdeveloped cultural understanding.

Emotional Development
Research indicates that overuse of AI technologies can significantly impact a child's emotional development because AI technologies lack the depth of true human interaction. AI cannot replicate the authenticity or complexity of human emotions, and studies in the International Journal of Innovative Science and Research Technology (IJISRT) show that relying on these tools for emotional regulation can impede a child's ability to develop self-soothing techniques and emotional resilience. However, AI can help support children with emotional difficulties by providing access to consistent, non-judgmental interaction. AI tools can provide children who experience emotional or behavioral challenges with personalized support in the form of comfortable and predictable interaction patterns, building their confidence and aiding healthy emotional development.

Physical Development
The use of digital tools has long been associated with improved hand-eye coordination, fine motor skills, and muscle memory. AI-based platforms often feature activities that require children to track moving objects, catch or throw virtual items, and manipulate small objects. Repeating these actions reinforces neural pathways to facilitate better motor control over time and helps to improve dexterity and control. A study by SBIR suggests that AI platforms have been notably helpful in improving motor skills and eye contact in children with autism. However, excessive sedentary behavior in children can have several long-term impacts on their physical health and development. Research published by MDPI shows that prolonged sedentary behavior contributes to obesity and increases fat mass. This, in turn, heightens the risk of the child developing chronic conditions, including cardiovascular disease, type 2 diabetes, and certain cancers that occur later in life. Additionally, high levels of sedentary behavior are linked to poorer cognitive function, decreased academic performance, and various psychosocial problems, including lower self-esteem and increased risk of depression. UNICEF recommends using augmented reality games and other technology-based physical activities to counter these negative effects, promoting more active lifestyles among children and mitigating the health risks associated with excessive sedentary behavior.

Positive and negative long-term effects
Research suggests that early exposure to AI in children can result in both positive and negative long-term effects. Balancing the potential benefits of these tools against their associated risks will lead to a better understanding of how children should interact with AI technologies.

Positive:
- Accessible, personalized learning
- Development of tech-savvy skills
- Enhanced cognitive skills
- Global access to education
- Intelligent tutoring systems

Negative:
- Dependency on technology
- Reduced human interaction
- Susceptibility to biases

Tags

software development, AI bias, project management, technology