The Concept of Safety in Aviation PDF

Summary

This document discusses the concept of safety in aviation, covering hazard identification, risk management, and the three eras of safety (technical, human, and organizational factors). It examines how accidents occur and includes examples of past incidents. The text considers human factors and organizational elements contributing to aviation safety.

Full Transcript


THE CONCEPT OF SAFETY

What is Safety in Aviation?
"The state in which the possibility of harm to persons or of property damage is reduced to, and maintained at or below, an acceptable level through a continuing process of hazard identification and safety risk management."

Key Elements: hazard identification and safety risk management.

❖ It is recognized that the aviation system cannot be completely free of hazards and associated risks.
❖ Safety is a dynamic characteristic of the aviation system, whereby safety risks must be continuously mitigated.
❖ The acceptability of safety performance is often influenced by domestic and international norms and culture.
❖ As long as safety risks are kept under an appropriate level of control, a system as open and dynamic as aviation can still be managed to maintain the appropriate balance between production and protection.

EVOLUTION OF SAFETY: THREE ERAS OF SAFETY

Technical Factors (from the early 1900s until the late 1960s)
Focus: technological factors and failures. Technological improvements reduced accident frequency, and regulatory compliance and oversight started expanding.

Structural Failures: In the early 1900s, aircraft were constructed from wood and fabric, which were prone to structural failures. As a result, planes would often break apart mid-flight due to weak materials.
Improvement: The shift to metal fuselages and the development of stronger, more durable aircraft materials like aluminum improved safety significantly.

Jet Engine Development: The introduction of jet engines in the 1940s revolutionized air travel, but early jet engines were prone to mechanical failures.
Example Incident: The de Havilland Comet, the first commercial jetliner, suffered catastrophic mid-air breakups in the early 1950s due to metal fatigue in the fuselage, a flaw in the plane's pressurization design.
Improvement: After a series of accidents, engineers refined the structural design of jetliners, leading to stronger airframes and a significant reduction in such incidents.

Human Factors (from the early 1970s until the mid-1990s)
Focus: Attention shifted to human factors (e.g., the human-machine interface), and safety became more about preventing human errors. This era initially focused on individuals without considering the complex environment in which they worked.

Pilot Error
Example Incident: The 1977 Tenerife Airport Disaster involved two Boeing 747s colliding on the runway due to communication failures and misunderstandings between the pilots and air traffic control. It remains the deadliest aviation accident in history.
Improvement: This accident led to the development of Crew Resource Management (CRM), a training system designed to improve teamwork, communication, and decision-making in the cockpit.

Fatigue and Human Performance: Research in the 1970s and 80s identified pilot fatigue as a significant risk factor for accidents.
Example Incident: In 1990, British Airways Flight 5390 suffered a near-fatal accident when the cockpit windshield blew out due to improper installation. The crew's ability to manage the emergency under pressure highlighted the need to address human limitations such as fatigue and stress.

Organizational Factors (from the mid-1990s to the present day)
Focus: A shift to a systemic view of safety. Organizational factors (e.g., culture, policies) became central, the concept of the organizational accident was introduced, and safety management transitioned from reactive to proactive.

Systemic Accidents
Example Incident: The 2009 crash of Air France Flight 447 was attributed not only to pilot error but also to organizational issues, such as insufficient pilot training on how to handle high-altitude stalls. This crash highlighted the need for better training programs and oversight.

Organizational Impact: The airline industry began emphasizing the importance of Safety Management Systems (SMS), requiring organizations to actively identify and mitigate risks.

Proactive Data Collection: Today, airlines use data-driven methods to identify and address emerging safety risks before they result in accidents.
Example: Airlines routinely collect and analyze data from flight data recorders (black boxes) and monitor trends through Flight Operations Quality Assurance (FOQA) programs to detect and correct unsafe practices before they lead to incidents.
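To make the FOQA idea concrete, here is a minimal sketch of exceedance screening over recorded flight data. It is not any airline's actual FOQA software: the parameter names, limits, and sample values are hypothetical, chosen only to show how routine flight-data parameters might be screened for events worth reviewing before they become incidents.

    # Minimal illustration of FOQA-style exceedance screening.
    # All thresholds and sample data are hypothetical, for illustration only.

    from dataclasses import dataclass

    @dataclass
    class FlightSample:
        flight_id: str
        touchdown_vertical_speed_fpm: float  # feet per minute at touchdown
        approach_speed_deviation_kt: float   # knots above reference approach speed

    # Hypothetical exceedance limits a safety department might monitor.
    LIMITS = {
        "hard_landing": lambda s: s.touchdown_vertical_speed_fpm < -600,
        "unstable_approach": lambda s: s.approach_speed_deviation_kt > 15,
    }

    def screen(flights):
        """Return a list of (flight_id, event_name) exceedances to review."""
        findings = []
        for sample in flights:
            for event, check in LIMITS.items():
                if check(sample):
                    findings.append((sample.flight_id, event))
        return findings

    if __name__ == "__main__":
        data = [
            FlightSample("AB123", -720.0, 4.0),   # hard landing
            FlightSample("AB124", -250.0, 18.0),  # unstable approach
            FlightSample("AB125", -300.0, 3.0),   # nominal
        ]
        for flight_id, event in screen(data):
            print(f"{flight_id}: flagged for {event}")

The point of the sketch is the workflow, not the numbers: exceedances are flagged from routine operations and reviewed, so unsafe trends can be corrected before an accident sequence ever starts.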
ACCIDENT CAUSATION IN AVIATION: THE SWISS-CHEESE MODEL

Accident Causation in Complex Systems
Accidents in aviation result from multiple contributing factors, which lead to successive breaches of system defenses. Just like layers of Swiss cheese, each defense layer has potential "holes" or weaknesses.

The Swiss-Cheese Model (James Reason)
Accidents are caused by successive breaches of system defenses. "Single-point failures rarely cause accidents in well-defended systems like aviation."

Active and Latent Failures

Active Failures: Errors or violations with immediate negative consequences, typically associated with front-line personnel (e.g., pilots, air traffic controllers).
Example: A pilot failing to notice a system warning.

Latent Conditions: System flaws that may remain dormant until triggered by specific events. They are often created by organizational or managerial decisions (e.g., poor safety culture, inadequate procedures) and may seem harmless until circumstances align and expose their risk.
Example: A poorly designed checklist that allows for errors.
Latent conditions can include:
○ Poor design choices.
○ Conflicting organizational priorities (e.g., safety vs. cost).
○ Lack of safety oversight.
○ Insufficient communication: safety concerns raised by employees are ignored by management, causing mistrust within the organization.

Feature         | Active Failures                                        | Latent Failures
Timing          | Immediate                                              | Dormant for a long time
Associated with | Front-line personnel (pilots, ATC, engineers, etc.)    | Higher-level management and organizational decisions
Examples        | Pilot error, air traffic control miscommunication, maintenance errors | Poor safety culture, inadequate procedures, poor equipment
Visibility      | Easily visible post-incident                           | Hidden until a triggering event
Potential risk  | Direct and immediate                                   | Builds over time; high-risk when triggered

Defenses in the Swiss-Cheese Model
Safety defenses are put in place to protect against human and technical failures. Defense layers may include:
○ Procedures
○ Training
○ Technology
○ Management decisions
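The claim that single-point failures rarely cause accidents in well-defended systems can be illustrated with a short Monte Carlo sketch of the model. This is a conceptual toy, not an aviation risk model: the per-layer breach probabilities are invented, and the only point is that an accident requires the holes in every independent defense layer to line up at once.

    # Toy Monte Carlo of Reason's Swiss-Cheese Model.
    # Each defence layer independently has a "hole" with some probability;
    # an accident occurs only when every layer is breached on the same occasion.
    # The probabilities are illustrative, not real aviation statistics.

    import random

    def accident_rate(layer_failure_probs, trials=1_000_000, seed=1):
        rng = random.Random(seed)
        accidents = 0
        for _ in range(trials):
            # An accident requires a breach of every defence layer at once.
            if all(rng.random() < p for p in layer_failure_probs):
                accidents += 1
        return accidents / trials

    if __name__ == "__main__":
        # One weak defence vs. four stacked defences (procedures, training,
        # technology, management decisions), each with a 5% chance of a "hole".
        print("1 layer :", accident_rate([0.05]))
        print("4 layers:", accident_rate([0.05] * 4))

With independent layers, the joint breach probability is the product of the per-layer probabilities, which is why defense in depth is effective and why latent conditions that quietly widen holes in several layers at once are so dangerous.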
Example: Air France Flight 447 (AF447/AFR447) was a scheduled international passenger flight from Rio de Janeiro, Brazil, to Paris, France. On 1 June 2009, inconsistent airspeed indications and miscommunication led to the pilots inadvertently stalling the Airbus A330. They failed to recover the plane from the stall, and the aircraft crashed into the mid-Atlantic Ocean at 02:14 UTC, killing all 228 passengers and crew on board.
Latent condition: lack of proper stall training for pilots.
Active failure: the pilots' inability to recover from the stall.

The Organizational Accident
The organizational accident occurs when multiple failures align across different levels of an organization, leading to an accident. It combines latent conditions (long-standing system weaknesses) with active failures (immediate errors made by front-line personnel). Reason's model shows that no single failure causes accidents; rather, accidents result from the breakdown of multiple safety defenses.

The organizational accident can be understood as having five key building blocks. These include organizational processes such as policy-making, planning, communication, resource allocation, and supervision.
Two critical processes for safety:
○ Resource allocation: Are enough resources allocated to safety?
○ Communication: Are safety concerns effectively communicated throughout the organization?

Organizational Factors
Breaches in system defenses can often be traced back to poor decisions at the organizational level.
Example: If an organization prioritizes cost-cutting over safety, latent conditions can be introduced into the system.

Organizational Failure (Poor Safety Culture): The airline prioritizes cost reduction over safety procedures, pressuring staff to cut corners on safety checks.
Latent Conditions Created:
○ Inadequate training: Due to budget cuts, new pilots are not trained on updated emergency procedures.
○ Improper maintenance: Engineers are given insufficient time to perform thorough checks, leading to hidden faults.

Latent Conditions Pathway
Latent conditions are hidden within the system and become dangerous over time. They are often the result of poor organizational decisions.
Examples of latent conditions:
○ Poor equipment design that leads to failures.
○ Incomplete or incorrect Standard Operating Procedures (SOPs) that confuse staff.
○ Inadequate training of personnel on critical tasks.
Clusters of latent conditions:
1. Inadequate hazard identification: Risks are not fully identified or addressed, allowing hazards to remain in the system.
2. Normalization of deviance: Small rule-breaking behaviors (shortcuts) become normalized because they seem to work without consequence.

Latent Condition Defences
Aviation systems are equipped with defences that serve to prevent latent conditions from leading to accidents. There are three main types of defenses:
○ Technology: Ensuring all equipment is up to standard and designed with safety in mind.
○ Training: Providing staff with the right knowledge and skills to prevent errors.
○ Regulations: Strong regulatory oversight to ensure compliance and proper safety protocols are followed.
Strengthening these defenses is crucial to prevent latent conditions from evolving into active failures.
Workplace Conditions Pathway
Workplace conditions refer to the immediate environment and factors that affect the efficiency and safety of front-line workers.
Examples of workplace conditions include:
○ Workforce stability: High turnover may lead to less experienced staff handling critical tasks.
○ Training and qualifications: Are employees adequately trained for emergency situations?
○ Morale and management credibility: Poor management practices lower morale, increasing error likelihood.
○ Ergonomics: Factors like poor lighting, noise, or uncomfortable working conditions impact performance.
These conditions lead directly to active failures, such as errors or violations.

The key to preventing organizational accidents lies in monitoring organizational processes to detect and manage latent conditions. Safety management efforts must therefore focus on:
1. Strengthening organizational processes to improve resource allocation and communication.
2. Enhancing workplace conditions to reduce the likelihood of active failures.
Safety breakdowns happen when latent conditions combine with active failures, so both must be addressed in safety strategies.

Active Failures: Errors vs. Violations
Active failures are actions by front-line personnel that directly contribute to accidents. They can be classified into errors and violations.

Errors: Unintentional mistakes made by workers who are trying to follow procedures but fail.
Example: A pilot mistakenly enters the wrong coordinates into the navigation system.

Violations: Intentional deviations from procedures or rules.
Example: A maintenance engineer knowingly skips a step in the inspection process to save time.

The main difference between errors and violations is intent: errors are mistakes, while violations are deliberate rule-breaking.

Key Assumptions in System Design
Initial Assumptions: During the design phase of systems like air traffic control or flight operations, several fundamental assumptions are made:
1. Technology: The technology will work as designed to achieve production goals.
2. People: Individuals are well trained to operate the technology.
3. Regulations: Established procedures and regulations will guide both the system and human behavior.
Baseline Performance: The system's performance is expected to follow a stable, predictable course from deployment to decommissioning, illustrated as a straight line.

Practical Drift
Practical drift, as defined by Scott A. Snook, refers to how system performance gradually deviates from its original design due to unforeseen circumstances in daily operations. In aviation, practical drift explains how procedures and system operations shift away from their ideal state over time. This drift occurs because the organization's initial processes and procedures cannot predict every situation that operational personnel will encounter.

How Practical Drift Occurs in Operations
Despite careful design, real-world operations create a "practical drift" away from baseline performance. This drift happens because the system interacts with complex and evolving environments, technology, and human behavior.
Causes of practical drift:
1. Technology might not work exactly as predicted.
2. Procedures may not be feasible under certain operational conditions.
3. Regulations may not cover all real-life scenarios.
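The drift idea lends itself to a tiny simulation. The sketch below is purely conceptual and every number in it is invented: each day, operational practice shifts slightly away from or back toward the written baseline, with a small bias toward shortcuts, and a monitoring threshold marks when the accumulated drift ought to be reviewed.

    # Toy illustration of "practical drift": small, locally sensible adaptations
    # accumulate until daily practice sits far from the designed baseline.
    # Numbers are invented; this is a conceptual sketch, not an operational model.

    import random

    def simulate_drift(days=365, step=0.02, bias=0.4, alert_at=1.0, seed=7):
        """Random walk of daily practice away from the baseline (0.0).

        bias > 0 skews adaptations toward shortcuts (productivity pressure);
        alert_at is the monitoring threshold at which drift should be reviewed.
        """
        rng = random.Random(seed)
        deviation = 0.0
        for day in range(1, days + 1):
            # Each day practice shifts a little; shortcuts are slightly more
            # likely than corrections back toward the written procedure.
            deviation += step * (1 if rng.random() < 0.5 + bias / 2 else -1)
            if abs(deviation) >= alert_at:
                return day, deviation
        return None, deviation

    if __name__ == "__main__":
        day, deviation = simulate_drift()
        if day is not None:
            print(f"Drift exceeded the review threshold on day {day} "
                  f"(deviation {deviation:+.2f}) without any single large failure.")
        else:
            print(f"After one year, deviation from baseline is {deviation:+.2f}.")

No single daily step is alarming, which is why capturing drift information early, as discussed next, matters more than searching for one big error.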
Managing and Mitigating Practical Drift
○ Potential for Learning: Analyzing practical drift helps identify successful safety adaptations and enables prediction and control of safety risks.
○ Timely Intervention: Capturing drift information early allows for hazard prediction and potential system redesign.
○ Unchecked Drift: When adaptations go unchecked, the system may stray too far from baseline performance, leading to an increased risk of incidents or accidents.

PEOPLE, CONTEXT, AND SAFETY

Overview of People, Context, and Safety
Complex Aviation System: The aviation industry is a highly complex system, incorporating both product and service providers, as well as State organizations.
Human Contribution to Safety: Understanding how humans interact with the various components of the system is critical in assessing safety and managing human performance.
Impact of Interrelated Components: Humans interact with a complex web of procedures, equipment, and environments, and the efficiency of these interactions influences overall system safety.

What Is the SHELL Model?
The SHELL Model is a conceptual tool designed to analyze human interactions with other system components. The term "SHELL" is an acronym for four key components: Software (S), Hardware (H), Environment (E), and Liveware (L). The model provides insight into the complex relationships that exist between humans and their working environments. Liveware, or humans, sits at the center of the model, indicating that humans are central to the aviation system's safety and performance.

Understanding Liveware
Central Role of Humans: Humans are at the core of the aviation system, and their performance can vary widely based on individual skills, adaptability, and experience.
Unique Challenges: Unlike machines, humans are not standardized, so their interaction with other system components may lead to inconsistencies or errors.
Avoiding Mismatches: System components must be matched to human abilities to avoid creating stress or performance degradation.

Liveware – Software Interface
The L-S interface focuses on the interaction between humans and support systems like manuals, checklists, standard operating procedures (SOPs), and software.
Challenges: Issues like the clarity of regulations, ease of use of manuals, and the accuracy of procedures affect how effectively humans can work within the system.
Impact on Performance: Poorly designed software or ambiguous procedures can create confusion, reducing overall operational efficiency.

Liveware – Hardware Interface
The L-H interface refers to the relationship between humans and physical equipment like aircraft, machines, and facilities.
Human Adaptation: People tend to adapt to mismatches between themselves and machines, but these mismatches can mask serious issues that become apparent only after incidents.
Importance of Design: Properly designed hardware must accommodate human limitations, reducing the risk of operational errors.

Liveware – Liveware Interface
The L-L interface deals with the relationships between people in the work environment, such as flight crews, air traffic controllers, and engineers.
Communication and Group Dynamics: Team communication and group dynamics are essential to operational success, and the quality of interpersonal skills impacts performance.
Importance of CRM: The rise of Crew Resource Management (CRM) highlights the importance of managing human interactions in aviation to prevent errors.

Dangers of Mismatches
According to the SHELL Model, mismatches between humans (Liveware) and other components such as software, hardware, and the environment contribute to human error.
Mitigating Mismatches: Assessing these interactions and ensuring that humans have the tools, support, and environment they need is critical to reducing errors and enhancing safety.
Continuous Improvement: Aviation organizations must regularly evaluate and adjust system components to better align with human capabilities and prevent errors.
Liveware – Environment Interface
The L-E interface encompasses both the internal workplace environment (lighting, temperature, noise) and the external environment (weather, terrain).
Influence on Performance: Factors such as fatigue, stress, and physical comfort can all impact human performance and decision-making.
Workarounds: When conditions are suboptimal, humans tend to develop workarounds, which may lead to deviations from standard procedures and increase safety risks.

Examples (identify the SHELL interface involved):

"During a routine flight, the autopilot system malfunctioned, causing the aircraft to deviate from its flight path. The pilots tried to correct the issue but were unfamiliar with the new autopilot system."
Ans: Liveware-Hardware (L-H) Interface

"The pilots were given outdated flight charts due to a delay in updating the flight manuals. This led to confusion during the flight."
Ans: Liveware-Software (L-S) Interface

"The maintenance team, due to resource shortages, skipped several checklist items, leading to an undetected issue with the aircraft engine. The pilots later struggled to handle the engine failure during the flight."
Ans: Primary: Liveware-Liveware (L-L) Interface; Secondary: Liveware-Environment (L-E) Interface

"During a flight, the pilots received conflicting instructions from air traffic control due to poor communication between the ground and air teams."
Ans: Liveware-Liveware (L-L) Interface

"The temperature inside the cockpit was extremely hot due to an air conditioning malfunction, causing the pilots to feel fatigued and distracted."
Ans: Liveware-Environment (L-E) Interface

"During an inflight emergency, the cabin crew struggled to follow the prescribed emergency procedures because the manual was outdated and did not reflect recent changes to the aircraft layout. Additionally, communication between the crew and the passengers was hampered due to a language barrier, causing panic among passengers. Despite their training, the crew could not efficiently adapt to the situation."
Ans: Primary: Liveware-Software (L-S) Interface; Secondary: Liveware-Liveware (L-L) Interface

"As an aircraft approached the runway for landing, its automated landing system failed to function properly due to a software bug that had not been identified during system testing. The pilots, trained on the manual landing procedure, had to switch quickly but experienced difficulty adapting to the situation. The bug had been reported months earlier, but no action had been taken to correct it, and the relevant communication had not reached the operations team."
Ans: Primary: Liveware-Software (L-S) Interface; Secondary: Liveware-Liveware (L-L) Interface, Liveware-Hardware (L-H) Interface

ERRORS AND VIOLATIONS

Effective SMS implementation requires a mutual understanding between the service provider and the State.

Key Difference: The main distinction between errors and violations lies in intent.
Errors: Unintentional acts that deviate from expectations.
Violations: Deliberate deviations from established procedures, protocols, or practices.

Impact of Errors and Violations:
○ Both can result in non-compliance with regulations and approved procedures.
○ Responses to non-compliance should consider whether it was caused by an error or a violation before punitive actions are taken, focusing on whether the act stems from willful misconduct or gross negligence.
Errors
An error is an action or inaction that deviates from intended outcomes. Errors occur despite training, regulations, and the level of technology used.

Types of Errors:
Slips and Lapses: Failures in executing an intended action.
○ Slips: Unplanned actions (e.g., using the wrong lever).
○ Lapses: Memory failures (e.g., forgetting a checklist item).
Mistakes: Failures in planning; even if the execution is correct, the intended outcome will still not be achieved.

Errors: Control Strategies
Error Reduction Strategies: Direct interventions to eliminate factors that contribute to errors.
Examples: Improving ergonomics, reducing distractions.
Error Capturing Strategies: Assume errors will happen and aim to capture them before consequences occur.
Examples: Use of checklists, procedural interventions.
Error Tolerance Strategies: Design systems to tolerate errors without major consequences.
Examples: Redundant systems, multiple inspection processes.

Violations
A violation is a deliberate deviation from regulations, procedures, or norms. Not all violations are malicious or result in adverse consequences.

Types of Violations:
Situational Violations: Committed in response to specific contexts, such as time pressure or high workload.
Routine Violations: Become normalized within work groups as a way to overcome practical difficulties. Referred to as "drift," these deviations may become frequent and lead to severe consequences if not addressed.
Organizationally Induced Violations: An extension of routine violations, often driven by organizational pressure to meet increased output demands. This may involve the weakening of safety defences to achieve productivity goals.
Consequences: If not managed carefully, these violations increase risks within the system.
Mitigation: Proper safety assessments should be conducted to evaluate whether the violation can be incorporated into accepted procedures without compromising safety.

SAFETY CULTURE

Safety culture refers to the collective beliefs, values, biases, and behaviors shared by the members of an organization or group. It influences how members perceive safety, approach risk, and collaborate on hazard reporting and mitigation.

Key Components: Organizational, professional, and national cultures. Each of these can impact safety performance, reporting behaviors, and the implementation of risk controls.

Safety Culture: Elements
Trust and Respect: A positive safety culture relies on trust between management and personnel. Both groups must feel secure in openly discussing and addressing safety concerns.
Continuous Vigilance: The organization must actively seek improvements, remain aware of hazards, and utilize systems for monitoring and investigating safety risks.
Collaboration: Successful safety cultures are supported by collaboration between management, frontline staff, and regulatory authorities. Shared commitment to safety and confidence in the safety system are crucial.
Training and Motivation: Ongoing training and motivational efforts are essential in ensuring that employees uphold safety standards and are willing to report hazards.

Safety Culture: Organizational Culture
Organizational culture refers to the values and behaviors that shape how members interact and perform tasks within an organization.
It sets the boundaries for what is considered acceptable behavior, decision-making, and prioritization in areas such as safety vs. efficiency.
Influence on Safety Culture: The culture within an organization can influence interactions between senior and junior staff, teamwork, and the degree to which safety information is shared with authorities. It also affects how personnel react under pressure and their willingness to embrace safety technologies.

Factors Affecting Organizational Culture
Business Policies and Management Attitudes: Organizational policies, including those that balance safety with productivity or financial goals, can either support or hinder a positive safety culture.
Management's Role: Senior management must actively support safety improvements, create a culture of responsibility, and ensure that personnel have the tools to protect themselves and report issues without fear.

EXAMPLES:
Business Policies and Management Attitudes: In an aviation company, business policies that balance safety with financial or productivity goals can greatly impact safety culture. For instance, if an airline prioritizes reducing operational costs, it might implement policies that focus on cutting down maintenance time to increase aircraft utilization. This could unintentionally create a culture where safety is seen as secondary to productivity, potentially pressuring ground crews to rush through inspections or pilots to minimize turnaround time between flights.
Management's Role: Senior management's commitment to safety is pivotal in fostering a positive safety culture. If the airline's leadership consistently communicates that safety is non-negotiable and invests in necessary resources (like state-of-the-art safety tools and equipment), employees are more likely to follow safety protocols.
Values and Behaviors: At an airline, the organizational culture might prioritize on-time performance as a key value. In a highly efficiency-focused culture, there could be subtle pressure to prioritize meeting flight schedules over strictly adhering to all safety protocols, especially during minor delays. This pressure may lead to cutting corners, like speeding through pre-flight checklists or rushing maintenance tasks.
Interaction between Senior and Junior Staff: In an airline where senior staff (like captains) are encouraged to mentor and listen to their junior counterparts, there will likely be open discussions about safety concerns. Junior pilots might feel empowered to raise safety issues or suggest go-arounds during adverse weather conditions, knowing that their input will be valued.

Safety Culture: Professional Culture
Professional Groups: Safety cultures differ among professional groups, such as pilots, air traffic controllers, and maintenance engineers, each of which has its own value system and behavior patterns.
Collaborative Safety Efforts: Effective safety performance requires collaboration between different professional groups. Professionals should distinguish between safety performance issues and industrial or contractual issues.

Pilots and Air Traffic Controllers (ATC):
○ Pilots prioritize safety through adherence to protocols and checklists, making decisions like diverting flights during emergencies.
○ ATC personnel ensure safe aircraft separation and guide planes through congested or adverse conditions, often making real-time critical decisions.

Maintenance Engineers:
○ Focus on preventive maintenance and thorough inspections, even if this delays flights, prioritizing long-term safety over short-term efficiency.

Civil Aviation Authority (CAA) Personnel:
○ Responsible for regulatory oversight, enforcing strict safety and legal standards during inspections and ensuring compliance for overall aviation safety.

Safety Culture: National Culture
National Culture Defined: Refers to how different societies view individual responsibility, authority, and resource allocation, all of which shape safety management practices.
Impact on Safety: National culture affects regulatory policies, enforcement methods, and how safety-related information is treated. Cultural differences can shape safety risk perceptions and require adjustments to communication and leadership styles.

Authority and Decision-Making: Cultures with high respect for authority (e.g., Japan) may discourage questioning senior officers, potentially impacting safety reporting. In contrast, egalitarian cultures (e.g., Scandinavia) encourage open communication and reporting.
Regulatory Enforcement: Strict enforcement in countries like the U.S. and U.K. ensures strong safety practices. In nations with limited resources, enforcement may be more relaxed, impacting compliance.
Safety Reporting Practices: Individualistic cultures (e.g., the U.S.) promote proactive reporting, while collectivist cultures (e.g., China) may avoid reports that could negatively affect the team's or organization's image.
National Priorities and Resources: Wealthier nations prioritize investments in aviation safety, while countries with fewer resources might focus on operational needs, potentially compromising safety standards.
Legal Systems and Accountability: Strong legal frameworks (e.g., Canada) enforce accountability for safety violations, while less developed legal systems may hinder proper enforcement and oversight.

Promotion and Assessment
Safety Culture Assessment: Organizational safety culture (OSC) assessments are valuable tools for measuring and monitoring the effectiveness of a safety culture. They can be enhanced with sector-specific organization risk profile (ORP) assessments to address industry-specific conditions.
Promotion of Safety Culture: Establishing safety awards or promotional schemes can incentivize organizations to voluntarily assess and improve their safety cultures, ensuring continual progress in safety performance.
