QAQC3200 (Theory) Handout - Higher College of Technology

Summary

This document is a handout for a course on Quality Assurance and Quality Control (QAQC3200) at the Higher College of Technology in Oman. It provides a general overview of quality concepts and includes examples, figures, and tables. It also introduces Total Quality Management (TQM) and its building blocks, and surveys the basic seven quality tools.

Full Transcript

SULTANATE OF OMAN
MINISTRY OF MANPOWER
HIGHER COLLEGE OF TECHNOLOGY
Department: Applied Sciences Department
Section: Environmental Sciences Section
Specialization: Environmental Sciences
Level: Advanced Diploma
Quality Assurance & Quality Control (QAQC3200)
HoS/HoU: Dr. Suad Al Kindi
For internal circulation and educational purposes only
Version No: 1
Last Date of Revision: Sept. 2017
Semester No./Academic Year: Sem. 1, 2017 - 2018
Page 1 of 86

Quality Assurance and Quality Control

1. Quality: Basic Considerations

1.1. Introduction:

If you ask the question "What is quality?" to different people, you will discover that they give different answers. This is not because the question is too general or too hard to answer, but because different people have different ideas about what quality is.

Historically, the word 'quality' is believed to have originated with Plato1. The word quality is used by most educated people every day of their lives, yet in order that we should have this simple word Plato had to make the tremendous effort (perhaps the greatest effort known to man) of turning a vague feeling into a clear thought. He invented a new word 'poiotes', 'what-ness' as we might say, or 'of-what-kind-ness', and Cicero translated it by the Latin 'qualitas', from 'qualis'1.

Nowadays, the accepted modern definitions of quality are those suggested by different quality experts1:

- 'Quality is fitness for use', Joseph Juran.
- 'Quality is the best for certain customer conditions', Armand Feigenbaum.
- 'Quality is meeting or exceeding customer expectations', Parasuraman.
- 'Quality is synonymous with excellence', a transcendent-approach definition.
- 'Quality is the degree to which a product possesses a specified set of attributes necessary to fulfill a stated purpose', the ISO 8402 standard definition; a product-based approach.
- 'Quality is conformance to engineering and manufacturing requirements', Philip Crosby; a manufacturing-based approach.

However, a more general and acceptable definition, which we will adopt, is: "Quality is a measure of a product's or service's ability to satisfy the customer's stated or implied needs"2.

In our business, we produce products such as drug products, electronic instruments, vehicles, etc., or services such as health care, education, airline transportation, etc. However, to understand the ideas of quality assurance and quality control, we will discuss the industrial production of some product. Any industrial process can be represented as in Figure 1.1 below.

Figure 1.1: Schematic representation of an industrial process.

Any product should have a predetermined specification, which comprises those aspects of the product that make the customer satisfied. For example, the specification of a car to be used in Oman should include an efficient cooling system, an air-conditioning system, etc. Quality control (QC) testing should be done to confirm that the product conforms to its specification. However, QC testing is not concerned with the production system; therefore, if during the manufacturing of an item the temperature rose to 60 °C where it should have been 35 °C, no QC action can be taken as long as the product still conforms to the pre-determined specification. It should be noted here that although the finished product (or end product) conforms to the specification, it was not manufactured correctly. Therefore, in order to control the whole process (including the production system, QC testing, specifications, etc.), the Quality Assurance (QA) system was introduced. Accordingly, QC and QA can be defined as2:

Quality Control: "The operational techniques and activities used to fulfill requirements for quality (specification)".
Quality Assurance: "All the planned and systematic activities (proactive and retrospective) implemented within the quality system that provide confidence that a product or service will fulfill requirements for quality".

Any successful organization or establishment should have a well-defined quality system, a clear quality policy, well-defined quality objectives, and sound quality planning. These terms can be formalized as follows2:

Quality system: "Formalized business practices that define management responsibilities for organizational structure, processes, procedures, and resources needed to fulfill product/service requirements, customer satisfaction, and continual improvement". It includes QC, QA, product design, etc.

Quality policy: "A statement of intentions and direction issued by the highest level of the organization related to satisfying customer needs". Each good organization should have a clear vision, mission, and quality policy.

Question: What are the vision and mission of the Higher College of Technology (HCT)?

Quality objectives: "Specific measurable activities or processes to meet the intentions and directions as defined in the quality policy".

Quality planning: "A management activity that sets quality objectives and defines the operational and/or quality system processes and the resources needed to fulfill the objectives".

Finally, and before moving to another topic, it is very important to understand who the customer is. The customer is a person or organization (internal or external) that receives a product or service anywhere along the product's life cycle. The external customer is the customer of the end product/service, while the internal customer is one that receives any part of the product life cycle within the organization itself. For example, in an industrial company, the warehouse department is the internal customer of the goods produced by the production department1.

1.2.
Overview on Total Quality Management (TQM):

Total Quality Management is the system of activities directed at achieving delighted customers, empowered employees, higher revenues, and lower costs3. It is a way of managing people and business processes to ensure complete customer satisfaction at every stage, internally and externally.

The building blocks of TQM are processes, people, management, systems, and performance measurement4. Everything we do is a process, which is a collection of interacting components that transform inputs into outputs toward a common aim, the mission statement5. The inputs can include actions, methods, and operations, while the outputs should satisfy the customer's needs and expectations. The process should continually be improved according to the feedback of the customer or of the process itself (Figure 1.2).

Figure 1.2: The building blocks of TQM.

1.3. Cost of Quality (COQ):

The "cost of quality" is the cost of not creating a quality product or service. Examples:

- Rework of a service, e.g. replacement of a food order.
- Correction of a bank statement.
- Rework of a manufactured item.
- Retesting of an assembly.

Allusions to quality costs first appeared in the 1930s in the work of Shewhart and, to a lesser extent, Miner and Crocket. Formalization of the concept of cost of quality developed in the 1950s from the work of Joseph Juran, Armand Feigenbaum, and Harold Freeman. Armand Feigenbaum, in his 1956 article which launched the terminology of total quality control, discussed three categories of quality costs: prevention costs, appraisal costs, and failure costs. He argued that "the ultimate end result is that total quality control brings about a sizable reduction in overall quality costs, and a major alteration in the proportions of the three cost segments." Philip Crosby's publication of Quality is Free in 1979 provided probably the biggest boost to popularizing the COQ concept beyond the quality profession.
In that work, Crosby wrote that "quality is measured by the cost of quality which is the expense of nonconformance - the cost of doing things wrong". The term cost of quality (COQ) was coined by Crosby6.

Total quality costs can be incurred by3, 6:

1- Prevention costs: costs of all activities specifically designed to prevent poor quality in products or services. Examples:
- New product review.
- Quality planning.
- Process capability evaluation.
- Quality training.

2- Failure costs: costs resulting from products or services not conforming to requirements or customer needs. They are of two types:
- Internal failure costs: occurring prior to delivery or shipment of the product, or the furnishing of a service, to the customer, e.g. the costs of rework, retesting, and re-inspection.
- External failure costs: occurring after delivery or shipment of a product, or the furnishing of a service, to the customer, e.g. the costs of processing complaints, customer returns, warranty claims, and recalls.

3- Appraisal costs: costs associated with measuring, evaluating, or auditing products or services to assure conformance to quality standards and performance requirements. Examples:
- Testing of purchased material.
- In-process and final testing.
- Calibration of measuring equipment.

An example of a study for a tire manufacturer is shown in Table 1.1 below. Some of the conclusions obtained from the study were as follows:

- The total of about $900,000 per year is large.
- 79.1% of the total cost is concentrated in failure costs.
- Only 4.3% of the total cost is spent on prevention.

As a result of this study, management decided to increase the budget for prevention activities.

Table 1.1: Annual quality cost analysis of a tire manufacturer3.
Type of quality cost                        Cost ($)      Percentage (%)
Cost of quality failures (losses):
  Defective stock                              3,276           0.37
  Repairs to product                          73,229           8.31
  Collect scrap                                2,288           0.26
  Waste - scrap                              187,428          21.26
  Consumer adjustments                       408,200          46.31
  Downgrading products                        22,383           2.59
  Total                                    $ 697,259          79.10 %
Cost of appraisal:
  Incoming inspection                         32,655           3.70
  Inspection 1                                32,582           3.70
  Inspection 2                                25,200           2.86
  Spot-check inspection                       65,910           7.37
  Total                                    $ 147,347          16.61 %
Cost of prevention:
  Local plant quality control engineering      7,848           0.89
  Corporate quality control engineering       30,000           3.40
  Total                                     $ 37,848           4.29 %
Grand total                                $ 882,454         100.00 %

2. Basic Quality Tools:

The basic quality tools, or the basic seven quality tools, are used to analyze and solve problems in business. The basic seven quality tools that will be discussed are: flowcharts, Ishikawa fishbone diagrams, check sheets, histograms, Pareto charts (analysis), quality control charts, and scatter diagrams.

2.1. Flowcharts

2.1.1. Definition and Overview:

Flowcharts, or process mapping tools, have been identified as a quality tool for many years. Flowcharts graphically represent the steps in creating a product or service. There are three basic types of flowcharts:

a. Block diagrams7: a tool for mapping the basic requirements of a procedure. Each block represents one of the individual procedure requirements. Example: measuring the pH of a phosphate buffer solution.

b. Flowchart (detailed)5, 8, 7, 9, 10: usually constructed using standard symbols. The symbols used in the flowchart describe the "what" and the "when" as shown by the chart sequence. The most important standard symbols are shown below; they are part of the ANSI standard symbols (ANSI: American National Standards Institute).

Examples:
- Figure 2.1 shows a flowchart for debugging a broken lamp.
- Figure 2.2 shows a flowchart for a pH measurement of a solution.

Figure 2.1: Flowchart for debugging a broken lamp.
Figure 2.2: Flowchart for a pH measurement of a solution.

2.1.2. Uses of flowcharts3, 9:

- A flowchart documents a system in a simple manner. Anyone can easily examine and understand the system (documentation tool).
- A flowchart provides an overview of the system. The critical elements and steps of the process are easily viewed in the context of a flowchart.
- A flowchart functions as a communication tool for establishing a process among a diverse workforce. It provides an easy way to convey ideas between engineers, managers, hourly personnel, vendors, and others in the extended process.
- A flowchart functions as a planning tool. Designers of processes are greatly aided by flowcharts. They enable a visualization of the elements of new or modified processes and their interactions while still in the planning stages.
- A flowchart functions as a training tool for operators to learn a particular way to work.
- A flowchart can be used to reduce errors due to operator inconsistencies.
- A flowchart facilitates troubleshooting and is used as a diagnostic tool. Problems in the process, failures in the system, and barriers to communication can be detected by using a flowchart.

2.1.3. Instructions for developing a flowchart9, 10:

1- Determine the start and stop points.
2- List the major steps in the process.
3- Put the steps in the proper order and draw the flowchart.
4- Test the flowchart for accuracy and completeness, which is better done by another person.
5- Compare the final actual flow with the best possible flow.
6- Look for possibilities to improve the process by reducing the non-value-added activities.

The non-value-added activities are those activities that do not contribute to the quality of the end product or service and, therefore, may waste time and money. Examples of the non-value-added activities that can be discovered through flowcharts are: too much inspection, delay times, and many unreasonable movements10, 11.
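A flowchart's decision symbols map directly onto conditional branching in code. Below is a minimal sketch of the broken-lamp flowchart of Figure 2.1; since the figure itself is not reproduced in this transcript, the exact decision steps (plugged in? bulb burned out?) are assumed from the classic version of that chart:

```python
# Each "if" below corresponds to one decision diamond in the flowchart,
# and each "return" to a terminal (action) symbol. The decision wording
# is assumed, since the figure is not reproduced in this transcript.

def debug_lamp(plugged_in: bool, bulb_burned_out: bool) -> str:
    """Walk the decision branches and return the recommended action."""
    if not plugged_in:           # first decision diamond
        return "Plug in the lamp"
    if bulb_burned_out:          # second decision diamond
        return "Replace the bulb"
    return "Repair the lamp"     # terminal step when both checks pass

print(debug_lamp(plugged_in=False, bulb_burned_out=False))
print(debug_lamp(plugged_in=True, bulb_burned_out=True))
```

Writing the chart as code also makes the "test for accuracy and completeness" step of 2.1.3 concrete: every branch can be exercised with sample inputs.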
The reasons for the non-value-added activities may include12:

- Process output failed to meet customer requirements.
- Inefficiency elsewhere within the process.

2.2. Ishikawa Diagrams

2.2.1. Definition and Overview:

Ishikawa diagrams were invented by Dr. Kaoru Ishikawa in 1943. He was a professor at the University of Tokyo and later president of the Musashi Institute of Technology, Tokyo. They are also called fishbone charts, after their appearance, and cause-and-effect diagrams, after their function3, 8, 13.

2.2.2. Purpose and Usage:

- Identify factors causing an undesired effect (problem).
- Identify factors needed to bring about a desired result, e.g. winning a proposal.

2.2.3. Procedure:

- Write the problem (defect) in a box to the right and draw a horizontal line running to it.
- As branches from the main line (arrow), write the main categories (major factors), which, as a starting point, may be taken as:
  - The four M's13: Method, Manpower, Material, and Machinery (for a production/manufacturing process). A more modern selection of categories used in manufacturing includes: Equipment, Process, People, Material, Environment, and Management.
  - The four P's13: Policies, Procedures, People, and Plant or location (including equipment and space) (for a service process).
  - Process steps: the major process steps are used.
- Other factors, like measurement and environment, may be considered.
- Factors can be subdivided as useful.
- Ask 'Why?' for each cause several times, five times if necessary; the number five in the "5 Whys approach" is only arbitrary13, 14, 16.

Example 2.2.1: Figure 2.3 below shows the Ishikawa diagram for the analysis of the waiting time in a doctor's office. Note how asking 'Why?' several times can add more ideas to each branch15.

Figure 2.3: Ishikawa diagram for the analysis of waiting time in a doctor's office15.

Example 2.2.2: Figure 2.4 below shows the Ishikawa diagram for the analysis of the problem of iron in some product13, 14.
Note the use of more categories than the 4 M's.

Figure 2.4: Ishikawa diagram for the analysis of iron contamination in some product13, 14.

2.3. Check sheets

2.3.1. Definition and Overview:

The check sheet is a structured, prepared form for collecting and analyzing data, after determining the root cause by Ishikawa diagrams3, 17.

2.3.2. Purposes and applications of check sheets (checklists):

1- Gathering and structuring data, as in the following cases17:
- Collecting data repeatedly by the same person or at the same location.
- Collecting data on the frequency of events, problems, defects, defect locations, defect causes, etc.
- Collecting data from a production process.

2- Verifying a hypothesis.

Data can be visible or invisible. Visible data are quantifications of product or process characteristics, as either attributes (good or bad items) or variables (e.g. size, length, weight, etc.). Visible data are collected with check sheets. Invisible data are unknown and/or unknowable, i.e. they cannot be quantified, like the cost of an unhappy customer or the benefit of a proud employee3.

2.3.3. Procedure17:

- Decide the event or problem and develop operational definitions.
- Decide when data will be collected and for how long.
- Decide the form and record data by ×'s, checks, etc.
- Determine the time period for data collection.
- Pilot the check sheet to determine the ease of use and reliability of results, and modify the check sheet if necessary.

2.3.4. Types of check sheets3, 18, 19:

Check sheets can be classified according to the type of data collected or according to the purpose of the check sheet. According to the type of data collected, check sheets are:

- Attribute check sheet: designed for gathering data about defects in a process.
- Variables check sheet: designed for collecting information about variables, such as size, length, weight, and diameter.
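In code terms, an attribute check sheet of the kind just described is simply a frequency tally. A minimal sketch using Python's standard collections.Counter; the defect names in the observation stream are invented for illustration:

```python
from collections import Counter

# Hypothetical stream of observed defects (invented names, for illustration):
observations = [
    "scratch", "dent", "scratch", "misprint", "scratch",
    "dent", "scratch", "misprint", "scratch", "dent",
]

# One tally mark per observation, exactly as on a paper check sheet:
check_sheet = Counter(observations)

for defect, count in check_sheet.most_common():
    print(f"{defect:<10} {'×' * count}  ({count})")
```

The printed ×-style tallies mirror the way frequencies are recorded on a paper check sheet, with the defect categories decided before collection begins.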
However, check sheets can also be classified according to their purpose as:

- Distribution check sheets: used to collect data in order to determine how a variable is dispersed within an area of possible occurrences.
- Location check sheets: used to highlight the physical location of a problem/defect in order to improve quality.
- Cause check sheets: used to keep track of how often a problem happens, or to record the cause of a certain problem.
- Classification check sheets: used to keep track of the frequency of major classifications involving the delivery of products or services, e.g. when we want to know how many trucks of each product type pass through the weigh stations.

Example 2.3.1: A book store consistently achieved lower sales per day than budgeted. A wide range of possible causes surfaced, and the data collected were as shown in Table 2.1 below. The results clearly show the main reasons for the low sales per day19.

Table 2.1: The possible causes, with their frequencies over two weeks, for the low sales of a book store.

Cause of no purchase   Week 1                     Week 2                       Total
Could not find item    ××××××××××××××××× (17)     ×××××××××××××××××××× (20)    37
No offer of help       ×××××××××××××× (14)        ×××××× (6)                   20
Item sold out          ×× (2)                     ××× (3)                      5
Prices too high        × (1)                                                   1
Poor lighting          ××××××× (7)                ×××××××××××× (12)            19
No place to sit        ×× (2)                     ×××× (4)                     6
Total                  43                         45                           88

Example 2.3.2: A bicycle manufacturing company received a lot of claims regarding the bicycle paints. The main reasons for nonconformities were scanned, and the results obtained after checking 2217 items were as shown in Table 2.2 below. The results clearly show the main reasons for the painting nonconformities.

Table 2.2: The main reasons for paint nonconformity, with their frequencies out of 2217 items checked, for a bicycle manufacturing company.
Nonconformity type   Tally                                                  Number
Blister              ××××××××××××××××××××× (21)                             21
Light spray          ×××××××××××××××××××××××××××××××××××××× (38)            38
Overspray            ××××××××××× (11)                                       11
Rusts                ××××××××××××××××××××××××××××××××××××××××××××××× (47)   47
Others               ××××× (5)                                              5
Total                                                                       122

2.4. Histograms

2.4.1. Definition and Overview:

Histograms are a specialized type of bar chart which graphically summarizes and displays the distribution and variation of a process data set. The histogram is a frequency distribution that shows how often each different value in a set of data occurs20.

2.4.2. Applications:

- Presenting data to determine which causes dominate.
- Understanding the distribution of occurrences of different problems, causes, frequencies, etc.

2.4.3. Construction of a histogram21:

- Gather and tabulate data on a process, product, or procedure, e.g. time, size, frequency of occurrences, etc.
- Calculate the range, R = maximum - minimum.
- Decide the number of classes (bars), K; usually 6-15 classes.
- Determine the fixed width of each class, i = R/K. The width, i, is preferably a round number without fractions, e.g. 10 instead of 9.6.
- Create a table for the categories.
- Plot the frequency on the y-axis and the interval of categories on the x-axis.

Important notes:
- Use intervals of equal lengths (equal categories).
- Show the entire vertical axis, beginning with zero.
- Do not break either axis.
- Use a number of categories, or classes, of about 5-15.

2.4.4. Histogram symmetry:

Symmetrical histograms result from a normal, bell-shaped distribution. The symmetry of histograms is described through "skewness" and "kurtosis", where symmetrical histograms would have zero skewness and zero kurtosis.
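The construction steps of 2.4.3 above can be sketched in code. This is a minimal illustration with invented data; for simplicity it uses the exact width i = R/K rather than rounding i up to a "nice" number as the handout recommends:

```python
def histogram_table(data, k):
    """Build the class table of 2.4.3: range R, then K equal classes of width i = R/K."""
    lo, hi = min(data), max(data)
    r = hi - lo                       # range R = maximum - minimum
    i = r / k                         # fixed class width i = R/K (not rounded here)
    counts = [0] * k
    for x in data:
        # Assign each value to its class; the top class is closed so the
        # maximum value is not dropped.
        idx = min(int((x - lo) / i), k - 1)
        counts[idx] += 1
    classes = [(lo + j * i, lo + (j + 1) * i) for j in range(k)]
    return list(zip(classes, counts))

# Invented measurements, purely for illustration:
data = [35, 38, 40, 42, 43, 43, 44, 45, 45, 46, 46, 47, 47, 47, 48, 48, 49]
for (low, high), n in histogram_table(data, k=5):
    print(f"{low:5.1f}-{high:5.1f}  {'#' * n}  ({n})")
```

Each printed row is one class of the frequency table; plotting the counts as bars gives the histogram itself.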
Skewness means lack of symmetry of a set of data; therefore, a skewed distribution is one that has a long tail in one direction. The curve is said to be right, or positively, skewed if the tail of the curve extends to the right, and negatively skewed if the tail extends to the left. A numerical measure of skewness is Pearson's coefficient of skewness, which is defined as5:

    Skewness = 3(x̄ - Me) / s     (2.1)

where x̄ is the mean, Me is the median, and s is the standard deviation.

Kurtosis means peakedness, and it is a measure of the length of the tails of the distribution. A symmetrical distribution with positive kurtosis indicates a greater-than-normal distribution of product in the tails, while negative kurtosis indicates shorter tails than a normal distribution would have3, 20.

2.4.5. Typical histogram distributions:

1- Normal distribution: a bell-shaped curve with the most frequent measurement appearing at the center of the distribution (Figure 2.5). It indicates that a process is running normally, with only common causes present20, 21.

Figure 2.5: Shape of a normal distribution histogram.

2- Skewed distribution: the spread of measurements is lower on one side of the central tendency than on the other (Figure 2.6). Moving the central tendency in the direction of smaller variance is unlikely unless the process is radically changed20, 21.

Figure 2.6: Shape of a skewed distribution histogram.

3- Bi-modal distribution (double-peaked): indicates that the measurements are not from a homogeneous process, since there are two peaks indicating two central tendencies (Figure 2.7). There are two (or more) factors that are not in harmony, like: 2 machines, 2 shifts, mixed outputs of two suppliers, etc. Since at least one of the peaks must be off-target, there is evidence here that improvements can be made20, 21.

Figure 2.7: Shape of a bi-modal distribution.

4- Cliff-like: appears to end abruptly at one end.
It is a truncated curve with the peak at the edge, trailing gently off to the other side (Figure 2.8). It often means that part of the distribution has been removed through screening, 100% inspection, or review20, 21.

Figure 2.8: Shape of a cliff-like distribution.

5- Plateau distribution (Figure 2.9): often means that the process is ill-defined to those doing the work, leaving everyone on their own20, 21.

Figure 2.9: Shape of a plateau distribution.

6- Saw-toothed distribution (comb distribution): appears as an alternating jagged pattern (Figure 2.10). Often, it indicates a measuring problem, e.g. improper gauge reading20, 21.

Figure 2.10: Shape of a saw-toothed (comb) distribution.

2.4.6. Batch performance monitoring using histograms:

In industry, the performance of batches can be monitored through the conformance of different batches to specifications. In this case, the specification limits can be displayed on the histogram, i.e. the target value, the lower limit, and the upper limit (Figure 2.11). By overlaying specification limits on a histogram, we can estimate how the process performed during the period of data collection. If the histogram shows that the process output is wider than the specification limits, then the process is not capable of meeting the pre-determined specifications, and the variation of the process should be reduced20.

Figure 2.11: Overlaying of target value and specification limits on a histogram.

Example 2.4.1: The data below are the spelling test scores for 20 students on a 50-word spelling test. The scores are: 48, 49, 49, 46, 47, 47, 35, 38, 40, 42, 45, 47, 48, 44, 43, 46, 45, 42, 43, and 47. Construct a suitable histogram for the data, determine Pearson's coefficient of skewness, and explain the histogram shape.

Solution: The largest number is 49 and the smallest is 35; therefore, the range R = 14. We will increase the range to 15 and use 5 classes, K = 5 and i = R/K = 15/5 = 3. The histogram of the distribution is shown in Figure 2.12.
Class    Frequency
35-37    1
38-40    2
41-43    4
44-46    5
47-49    8

Figure 2.12: A histogram showing the distribution of data in example 2.4.1.

To calculate Pearson's coefficient of skewness using equation 2.1:
- Median, Me = (45 + 46) / 2 = 45.5
- Mean, x̄ = Σx / n = 892 / 20 = 44.6
- Std. dev., s = 3.80

Therefore, skewness = 3 × (44.6 - 45.5) / 3.80 = -0.71, which means that the histogram is left (negatively) skewed.

Example 2.4.2: The data below are the pH values of 20 batches of a food product, where the pH limit is 4.5-7.5: 6.0, 6.1, 5.9, 5.8, 5.2, 4.8, 6.4, 4.1, 7.8, 8.3, 6.2, 6.5, 5.9, 6.3, 6.8, 7.3, 6.9, 6.7, 5.3, 5.9. Discuss the distribution of the data using a suitable histogram.

Solution: The largest pH value is 8.3 and the smallest is 4.1; therefore, the range R = 4.2. To simplify the analysis and make the histogram more informative, we will use 10 classes, K = 10, and increase the range to 5.0; therefore, i = R/K = 5/10 = 0.5. The histogram of the distribution is shown in Figure 2.13.

Range     Frequency
4.1-4.5   1
4.6-5.0   1
5.1-5.5   2
5.6-6.0   5
6.1-6.5   5
6.6-7.0   3
7.1-7.5   1
7.6-8.0   1
8.1-8.5   1
8.6-9.0   0

Figure 2.13: A histogram showing the distribution of data in example 2.4.2.

From the histogram, we can conclude that the process output has a normal distribution, which indicates that the process is stable. The points that are out of specification are not expected to result from the process itself, but may have some other causes.

2.5. Pareto Chart (Analysis)

2.5.1. Definition and Overview:

Pareto analysis uses the Pareto principle, also called the 80:20 rule, to analyze and display data. Vilfredo Pareto was a 19th-century Italian economist (1848-1923) who studied the distribution of income in Italy. He found that about 20% of the population controlled about 80% of the wealth22, 23.
Quality expert Joseph M. Juran applied the principle to quality control, meaning that about 80% of the problems stem from 20% of the possible causes3, 5, 8. A Pareto chart is a bar chart that displays the relative importance of problems. The most important problem, e.g. the highest in cost, frequency, etc., is represented by the tallest bar, the next most important problem is represented by the next tallest bar, and so on.

2.5.2. Purposes and applications of Pareto charts:

- Finding out the most significant problem or cause among many.
- Analyzing broad causes by looking at their specific components.
- Saving money by helping to eliminate major sources of defects.
- A tool for communicating with others about some particular data.

2.5.3. Procedure:

1. Decide the categories.
2. Decide the measurement, e.g. frequency, cost, time, quantity, etc.
3. Decide the period of time, e.g. day, week, etc.
4. Collect data, or assemble data that already exist.
5. Subtotal the measurement for each category.
6. Determine the scale for measurements, with the maximum value being the largest subtotal or the sum of all subtotals.
7. Construct and label the bars, with the tallest at the left.
8. Calculate the percentage for each category: the subtotal for that category divided by the total for all categories. Draw a right vertical axis and label it with percentages.
9. Calculate and draw the cumulative sums24, 25. Find out the causes that account for about 80% of the problem.

Note: always develop a table including 4 columns: types or causes of defects, amounts or frequencies, percentage of defects, and cumulative amount or frequency percentage. The data in the first two columns should be in descending order.

Example 2.5.1: A company manufacturing electronic devices wanted to analyze the major cause(s) of the defective items. Out of 10000 items manufactured, the numbers of defective items with their causes were: component defects (13), design defects (57), build defects (4), others (2).
It is obvious that design defects are the main cause of failure, where these 57 design defects were distributed as follows: torque motors defect (16), transducer module (4), ASIC calibration (3), connect module (21), cold start (11), others (2). Find out the main defects causing problems with these devices.

Solution: In the first step, we construct a table for the defects with their percentages, Table 2.3, and then we make a plot (Figure 2.14) and find out the main causes that make up about 80% of the problems.

Table 2.3: Defects and their percentages as mentioned in example 2.5.1.

Defect              Count    % (Count × 100 / 57)    Cumulative %
Connect module      21       36.8                    36.8
Torque motors       16       28.1                    64.9
Cold start          11       19.3                    84.2
Transducer module   4        7.0                     91.2
ASIC calibration    3        5.3                     96.5
Others              2        3.5                     100.0

Figure 2.14: Pareto chart for the data in example 2.5.1.

From the Pareto analysis, it is obvious that about 80% of the problems with these electronic devices stem from the defects: connect module, torque motors, and cold start, while the other defects are only minor.

2.5.4. Stratification:

2.5.4.1. Definition and Overview:

Stratification is a technique used in combination with other data analysis tools. When data from a variety of sources or categories have been lumped together, the meaning of the data cannot be seen clearly. Therefore, stratification is used to separate the data so that patterns can be seen. It is the systematic subdivision of process data to obtain a detailed understanding of the process's structure5, 26. In cases where stratification will be used, it should be considered before collecting data.

When to use stratification?

- When data come from several sources or conditions, such as shifts, days of the week, suppliers, or population groups.
- When data analysis may require separating different sources or conditions.

2.5.4.2. Procedure of stratification:

1- Before collecting data, consider which information about the sources of the data might have an effect on the results. Set up the data collection so that you collect that information as well.
2- When plotting or graphing the collected data on a scatter diagram, control chart, histogram, or other analysis tool, use different marks or colours to distinguish data from various sources. Data that are distinguished in this way are said to be "stratified".
3- Analyze the subsets of stratified data separately.

Stratification is a useful application of Pareto charts, and in this case it is simply the creation of a set of Pareto charts for the same data, using different possible causative factors. For example, a particular defect (or the total number of defects) in some product can be plotted against three possible sets of potential causes, e.g. production line, shift, and product type. Figure 2.16 below shows, through stratified plots, that, for that particular product, there is no significant difference in defects between the production lines (1 and 2) or the shifts (1, 2, and 3), but product type 3 has significantly more defects than do the other types (1-5)27.

Figure 2.16: Stratification can be used to find the real causes of some defects27.

2.6. Control Charts:

2.6.1. Run charts:

A run chart is a scatter plot of the measurements versus the time order in which the objects were produced (1 = 1st, 2 = 2nd, etc.). The data points are linked by lines (Figure 2.17). The run chart shows the history of the process. We add a horizontal line at the position of the target value if we want to see whether the process is centered on target or straying from target.
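The run-chart idea can be sketched numerically as well as graphically. A minimal Python illustration, using the first ten resistance-layer thicknesses from Example 2.6.1 below (design target 205 μm): the deviation of each time-ordered point from the target line, together with the position of the mean centerline, shows whether the process is centered on target:

```python
# A sketch of the run-chart centerline idea (not a reproduction of the
# handout's figures). Data: the first ten thickness values of Table 2.5,
# with the design target of 205 um.
target = 205.0
measurements = [206, 204, 206, 205, 214, 215, 205, 202, 210, 212]

mean = sum(measurements) / len(measurements)   # position of the centerline

for t, x in enumerate(measurements, start=1):  # 1 = 1st produced, 2 = 2nd, ...
    print(f"t={t:2d}  thickness={x}  deviation from target: {x - target:+.1f}")

print(f"centerline (mean) = {mean:.1f}, offset from target = {mean - target:+.1f}")
```

A positive offset of the centerline from the target line is the numerical counterpart of a run chart whose points sit mostly above the target.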
Alternatively, we could use a horizontal centerline at the position of the mean if we wanted a visual basis for detecting changes in the average level over time28, 29.

Figure 2.17: A run chart is a scatter plot that shows the history of the process.

Example 2.6.128: The data in Table 2.5 below are measurements of the thickness of the resistance layer on integrated circuits (ICs) manufactured by a certain company. The design of the product specifies a thickness of 205 μm. Use a histogram and a run chart to visualize the data.

Table 2.5: Thickness of resistance layer on integrated circuits28.
206 204 206 205 214 215 205 202 210 212
202 208 207 218 188 210 209 208 203 204
200 198 200 195 201 205 205 204 203 205
202 208 203 204 208 215 202 218 220 210

Solution: Range, R = 220 − 188 = 32; round it up to R = 35. Select the number of classes, K = 7. Therefore, the class width, I = R / K = 35 / 7 = 5. Figure 2.18 below shows the run chart and the histogram, superimposed on each other, for the data.

Figure 2.18: A run chart and a histogram for the data in example 2.6.128.

2.6.2. Specification limits:
In some cases, we put specification limits, an upper specification limit (USL) and a lower specification limit (LSL), on the run chart. A specification is an explicit "set of requirements to be satisfied by a material, product, or service". Such limits are externally imposed, e.g. by the customer. Units falling outside these limits are unacceptable and will be rejected. It is the manufacturer's job to come up with a process that produces units falling within the specification limits3, 5, 28. As an illustration, Figure 2.19 charts the volumes of the contents of 2-liter cartons of milk sampled from a production line every 5 minutes, with specification limits imposed on the chart28.
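The range and class-width arithmetic in example 2.6.1 above can be checked with a short sketch. Python and the variable names are my own choice for illustration; they are not part of the handout.

```python
# Resistance-layer thicknesses (micrometers) from Table 2.5, example 2.6.1.
thickness = [206, 204, 206, 205, 214, 215, 205, 202, 210, 212,
             202, 208, 207, 218, 188, 210, 209, 208, 203, 204,
             200, 198, 200, 195, 201, 205, 205, 204, 203, 205,
             202, 208, 203, 204, 208, 215, 202, 218, 220, 210]

data_range = max(thickness) - min(thickness)  # R = 220 - 188 = 32
rounded_range = 35                            # rounded up, as in the handout
n_classes = 7                                 # chosen number of classes, K
class_width = rounded_range / n_classes       # I = R / K = 35 / 7 = 5
```

With these values, each histogram class spans 5 μm, which is the class width used in Figure 2.18.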
Failure of a process to deliver a product within the specification limits may arise for different reasons (Figure 2.20), such as28:
a. Too high an average level.
b. Too variable a process output.
c. Too low an average level.

Figure 2.19: A run chart of the filling volume of milk cartons over time.

Figure 2.20: Failure to comply with specification limits may be due to: (a) too high an average level, (b) too variable a process output, or (c) too low an average level28.

2.6.3. Statistical Stability:
A process is statistically stable over time (with respect to characteristic x) if the distribution of x does not change over time. Stability enables us to predict the range of variability to expect in our product in the future. With an unstable process, as the distribution changes with time, we have no idea what to expect, making intelligent forward planning virtually impossible (Figure 2.21)28.

Figure 2.21: Future prediction and planning are possible with stable processes only28.

When plotted on a run chart, a statistically stable process bounces about its center in a random fashion (Figure 2.22). Non-random behavior tells us that the process is not statistically stable over time. Gross outliers are a sign of an unstable process. A process is said to be "in control" with respect to the characteristic x if the distribution of x appears to be statistically stable over time. Unstable processes are called "out-of-control" processes5, 28.

Figure 2.22: A run chart for a statistically stable process.

Examples of unstable (out-of-control) processes: Many causes can drive a process out of control, each giving the run chart a different pattern. Three examples are mentioned here (Figure 2.23)28:
a. Mean level drifting upwards: e.g. gradual wear in one of the machine components.
b.
Sharp change in mean level: could arise, for example, because a component has partially failed, or an operator has been replaced by one who does not follow the procedure properly.
c. Variability increasing: e.g. an increase of the standard deviation caused by machine wear.

Figure 2.23: Examples of unstable processes: a) mean level drifting upwards, b) sharp change in mean level, and c) variability increasing28.

2.6.4. Control Charts:
Control charts are one of the most commonly used methods of statistical process control (SPC), which monitors the stability of a process. The main features of a control chart are the data points, a centerline (mean value), and upper and lower limits. A control chart is a tool used to determine whether a manufacturing or business process is in a state of "statistical control" or not. The control chart is also known as a "Shewhart" chart or "process-behavior" chart. A control chart is a specific type of run chart that allows significant change to be differentiated from the natural variability of the process.

2.6.4.1. Historical overview:
The control chart was invented by Walter A. Shewhart while working for Bell Labs in the 1920s (his famous memorandum is dated May 16th, 1924). Through his invention, Shewhart showed that variations in a process are of two kinds: "common cause" and "special cause" variation. He stressed that bringing a production process into a state of "statistical control", where there is only common-cause variation, and keeping it in control, is necessary to predict future output and to manage a process economically29.

2.6.4.2. Purposes and applications of control charts28, 30:
1. Provide a historical record of the behavior of a process.
2. Allow us to monitor a process for stability and center attention on detecting and monitoring process variation over time.
3. Act as a tool for ongoing control of a process and help us to detect changes from a previously stable pattern of variation.
4. Signal the need for the adjustment of a process.
5.
Help us to detect special causes of variation.
6. Help to improve a process so that it performs consistently and predictably, achieving higher quality, lower cost, and higher effective capacity.
7. Serve as a common language for discussing process performance.

2.6.4.3. The function of a control chart:
The run chart provides a picture of the history of the performance of the process. A control chart places additional information onto the run chart (the control limits), aimed at helping us decide how to react, right now, to the most recent information about the process shown in the chart. Human beings are tempted to tamper with a process in order to keep its output at the target value. However, according to Shewhart, when a process is subject only to variation that looks random, such tampering only makes things worse (more variable). Therefore, we should not tamper with a process until the causes of its variation are well understood: control charts show the variability of the data, and we should understand that variability first28.

2.6.4.4. Common-cause vs. special-cause variation:
When trying to understand the variation evident in a run chart, we distinguish two types of variation:
a. Common-cause variation: stable, background random variation. It is present in nearly all processes. It is an inherent characteristic of the process, stemming from the natural variability in the inputs to the process and its operating conditions. When only common-cause variation is present, adjusting the process in response to each deviation from target increases the variability3, 5, 28, 31.
b. Special-cause (or assignable-cause) variation: variation that shows up as outliers or specific identifiable patterns in the data. It is an unusual variation. Special-cause variation may appear due to events external to the usual functioning of the system, such as new raw materials, a broken die, or an untrained new operator3, 5, 28, 31.

2.6.4.5.
The basic idea of control chart construction:
Control charts are built on a basic statistical idea: plotting the mean or range of successive subgroups of data against time. The upper and lower control limits are based on the standard deviation. For a Normal distribution, 99.7% of observations fall within μ ± 3σ (the 3-sigma limits), i.e. 99.7% of observations sampled from a Normal distribution fall within 3 standard deviations of the mean. But what if the distribution of x is not Normal? According to Chebyshev's inequality, irrespective of the distribution, at least 89% of the observations fall within the 3-sigma limits (1 − 1/3² = 8/9 ≈ 0.889). Thus, 3-sigma limits give a simple and appropriate way of deciding whether or not a process is in control; for good and safe control, subsequently collected data should fall within 3 standard deviations of the mean. Accordingly, if observations start to appear outside these limits, we would suspect that the process is no longer in control and that the distribution of x has changed5, 28, 31, 32, 33.

Figure 2.24 below shows the control chart for an ideal, statistically stable data set whose distribution can be displayed as a Normal bell curve or probability density function (PDF)33.

Figure 2.24: Control chart showing the PDF and Normal bell-shaped curve for a data set33.

2.6.4.6. Control charts for variables data (control charts for groups of data):
2.6.4.6.1. Subgroups:
An important factor in preparing for SPC (statistical process control) charting, i.e. the construction of control charts, is deciding whether we will measure every product of the process, such as measuring every part, or use subgroups. Subgroups are samples of data drawn from the total possible data. Subgroups are used when it is impractical or too expensive to collect data on every single product or service in the process. The decision whether to use subgroups needs to be thought out carefully to ensure that they accurately represent the data.
Subgroups need to be homogeneous within themselves so that special causes can be recognized, i.e. so that problem areas stand out from the normal variation within the subgroup. For example, if you are in charge of analyzing processes in a number of facilities, a separate group should represent each facility, since each facility has different processes for doing the same tasks. Each facility subgroup should probably be broken down even further, for example by work shifts. All data in a subgroup have something in common, such as a common time of collection: all data for a particular date, a single shift, or a time of day. Subgroup data can have other factors in common, such as data associated with one operator or with a particular volume of liquid.

2.6.4.6.2. Monitoring average level: the x̄-chart3, 5, 28, 32, 33:
The x̄-chart is used to look for changes in the average value of the x-measurements as time goes on. As the measured characteristic of the process may not be normally distributed, we work with sample means instead of individual x-values. Data are collected in at least 20 subgroups of size 3 to 6 (typically 5) measurements, and the mean of each subgroup is computed. In this case, the subgroup means x̄ are approximately normally distributed with mean μ and standard deviation σ/√n, so almost all subgroup means will lie within the 3-sigma limits. If we knew the true values of μ and σ, the limits would be at

μ ± 3σ/√n

However, we do not know μ or σ; therefore, the control limits must be estimated from the data.

Construction of the x̄-chart: Suppose we have nk observations made up of k subgroups, each of size n, as in example 2.6.1 below. The x̄-chart parameters can be estimated as follows:
- The centerline is plotted at the level of the grand mean x̿, the sample mean of the k subgroup means: x̿ = (x̄1 + x̄2 + … + x̄k) / k
- The upper and lower control limits are at x̿ ± 3s̄/(d1√n), where s̄ is the mean of the k subgroup standard deviations and d1 is the correction factor chosen so that s̄/d1 is an unbiased estimate of σ when we are sampling from a normal distribution.
Note: The quantity 3/(d1√n) is sometimes called A1; the limits are then written x̿ ± A1s̄.

Example 2.6.124: Telstar Appliance Company uses a process to paint refrigerators with a coat of enamel. During each shift, a sample of 5 refrigerators is selected (1.4 hours apart) and the thickness of the paint (in mm) is determined. If the enamel is too thin, it will not provide enough protection. If it is too thick, it will result in an uneven appearance with running and wasted paint. Table 2.6 below lists the measurements from 20 consecutive shifts. In the language of the previous paragraph, a sample of 5 from the same shift is a subgroup, and we have 20 subgroups, i.e. k = 20 with n = 5. Provide an x̄-chart for these data.

Table 2.6: Raw data and calculated summary values for example 2.6.128.

Shift No.   Thickness (mm)          Mean (x̄)   Range (r)   Std. Dev. (s)
1           2.7 2.3 2.6 2.4 2.7     2.54       0.4         0.1817
2           2.6 2.4 2.6 2.3 2.8     2.54       0.5         0.1949
3           2.3 2.3 2.4 2.5 2.4     2.38       0.2         0.0837
4           2.8 2.3 2.4 2.6 2.7     2.56       0.5         0.2074
5           2.6 2.5 2.6 2.1 2.8     2.52       0.7         0.2588
6           2.2 2.3 2.7 2.2 2.6     2.40       0.5         0.2345
7           2.2 2.6 2.4 2.0 2.3     2.30       0.6         0.2236
8           2.8 2.6 2.6 2.7 2.5     2.64       0.3         0.1140
9           2.4 2.8 2.4 2.2 2.3     2.42       0.6         0.2280
10          2.6 2.3 2.0 2.5 2.4     2.36       0.6         0.2302
11          3.1 3.0 3.5 2.8 3.0     3.08       0.7         0.2588
12          2.4 2.8 2.2 2.9 2.5     2.56       0.7         0.2881
13          2.1 3.2 2.5 2.6 2.8     2.64       1.1         0.4037
14          2.2 2.8 2.1 2.2 2.4     2.34       0.7         0.2793
15          2.4 3.0 2.5 2.5 2.0     2.48       1.0         0.3564
16          3.1 2.6 2.6 2.8 2.1     2.64       1.0         0.3647
17          2.9 2.4 2.9 1.3 1.8     2.26       1.6         0.7021
18          1.9 1.6 2.6 3.3 3.3     2.54       1.7         0.7829
19          2.3 2.6 2.7 2.8 3.2     2.72       0.9         0.3271
20          1.8 2.8 2.3 2.0 2.9     2.36       1.1         0.4827
Column means:                       x̿ = 2.514  r̄ = 0.77    s̄ = 0.3101

Solution: To construct an x̄-chart, all we need is the average of the subgroup means (x̿) and the average of the subgroup standard deviations (s̄), together with the value of d1, which can be found in Table 2.7 along with the other constants needed for constructing control charts (d1 = 1.1894 for n = 5).
Centerline at x̿ = 2.514 mm
Upper control limit at x̿ + 3s̄/(d1√n) = 2.514 + (3 × 0.3101)/(1.1894 × √5) = 2.514 + 0.350 = 2.864 mm
Lower control limit at x̿ − 3s̄/(d1√n) = 2.514 − 0.350 = 2.164 mm

The resulting x̄-chart is shown in Figure 2.25: a plot of x̄ for each shift against shift number, with the centerline and the upper and lower control limits imposed.

Figure 2.25: x̄-chart for the data in example 2.6.1, using s̄ to calculate the control limits28.

Alternatively, r̄ (the average of the subgroup ranges) can be used instead of s̄ to estimate the control limits, especially when the charts are constructed and updated by hand, although it is less precise. The correction factor to be used is then d2 instead of d1 (d2 = 2.3259 from Table 2.7), and the control limits are estimated as follows:
Upper control limit at x̿ + 3r̄/(d2√n) = 2.514 + (3 × 0.77)/(2.3259 × √5) = 2.96 mm
Lower control limit at x̿ − 3r̄/(d2√n) = 2.514 − (3 × 0.77)/(2.3259 × √5) = 2.07 mm

Note: The quantity 3/(d2√n) is sometimes called A2; the limits are then written x̿ ± A2r̄.

The resulting x̄-chart is shown in Figure 2.26. Note that point 11 is still an outlier even when the less precise r̄ is used to calculate the control limits, which indicates that the process is out of control. However, since only one point (the 11th) lies outside the control limits, we should check whether it is an isolated problem by looking for any reasons that may have caused it; if a reason is found, the point should be removed and the chart recalculated.

Figure 2.26: x̄-chart for the data in example 2.6.1, using r̄ to calculate the control limits28.

An important practical point: to construct a control chart for the first time, it is usual to have a set-up phase of perhaps 20 to 30 subgroups from which the centerline and control limits are calculated. After that, during data collection, the limits can be recalculated and updated.

Table 2.7: Constants for the construction of control charts28.
n    d1       d2       A1       A2       D3       D4       B3       B4
2    1.7725   1.1284   1.1968   1.8800   0.0000   3.2665   0.0000   10.8276
3    1.3820   1.6926   1.2533   1.0233   0.0000   2.5746   0.0010   6.9078
4    1.2533   2.0588   1.1968   0.7286   0.0000   2.2820   0.0081   5.4221
5    1.1894   2.3259   1.1280   0.5768   0.0000   2.1145   0.0227   4.6167
6    1.1512   2.5344   1.0638   0.4832   0.0000   2.0038   0.0420   4.1030
7    1.1259   2.7044   1.0071   0.4193   0.0757   1.9243   0.0635   3.7430
8    1.1078   2.8472   0.9575   0.3725   0.1362   1.8638   0.0855   3.4746
9    1.0942   2.9700   0.9139   0.3367   0.1840   1.8160   0.1071   3.2656

2.6.4.6.3. Control charts for variation: the R-chart3, 5, 28, 33
An x̄-chart focuses attention on the constancy of the average level (μ) and is not good at detecting changes in variability (σ). An R-chart (or range chart) is specifically designed to detect changes in variability. In this case, we plot the subgroup ranges, ri, rather than the subgroup means (x̄i). The R-chart is constructed as follows:
- Centerline at r̄
- Upper control limit at μR + 3σR, estimated as D4r̄, where D4 is a constant.
- Lower control limit at μR − 3σR, estimated as D3r̄, where D3 is a constant.

For the previous example 2.6.1, r̄ = 0.77, and from Table 2.7, for n = 5, the constants are D3 = 0 and D4 = 2.1145. Therefore,
Upper control limit = D4r̄ = 1.63
Lower control limit = D3r̄ = 0

The R-chart, shown in Figure 2.27, is obtained by plotting ri for each shift against shift number, with the centerline and the upper and lower control limits imposed28.

Figure 2.27: R-chart for the data in example 2.6.128.

The R-chart clearly indicates that the process is out of control because, firstly, one point is above the upper control limit and, secondly, the range seems to be steadily increasing, telling us that the variability of the paint thickness is increasing over time. The x̄-chart should therefore be considered together with the R-chart: the painting process is out of control not because of changes in mean thickness but because of increasing variability in paint thickness.
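The x̄-chart and R-chart limits of example 2.6.1 can be reproduced from the Table 2.6 summary values with a short sketch. Python and the variable names are my own choice for illustration, not part of the handout.

```python
from math import sqrt

# Summary values from Table 2.6 and constants from Table 2.7 (n = 5)
xbarbar, sbar, rbar, n = 2.514, 0.3101, 0.77, 5
d1, d2, D3, D4 = 1.1894, 2.3259, 0.0, 2.1145

# x-bar chart limits from the mean subgroup standard deviation s-bar
ucl_x_s = xbarbar + 3 * sbar / (d1 * sqrt(n))  # ~2.864 mm
lcl_x_s = xbarbar - 3 * sbar / (d1 * sqrt(n))  # ~2.164 mm

# x-bar chart limits from the mean subgroup range r-bar (less precise)
ucl_x_r = xbarbar + 3 * rbar / (d2 * sqrt(n))  # ~2.96 mm
lcl_x_r = xbarbar - 3 * rbar / (d2 * sqrt(n))  # ~2.07 mm

# R-chart limits
ucl_R = D4 * rbar                              # ~1.63
lcl_R = D3 * rbar                              # 0.0 for n = 5
```

Both versions of the x̄-chart flag shift 11 (mean 3.08 mm) as an outlier, matching Figures 2.25 and 2.26.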
When using the R-chart, subgroup ranges lying outside the control limits indicate that the process is out of control. Trends in the R-chart may indicate a problem such as wear in the machine.

Note: When s̄ is calculated and used instead of r̄, we construct what is called an s-chart, in which case we use the constants B3 and B4 instead of D3 and D4.

2.6.4.6.4. Control charts for single measurements: the "Individuals chart"3, 5, 28
Sometimes data may be available only weekly, monthly, or even yearly, so that data do not accumulate quickly enough to form subgroups. In this case we are forced to construct a control chart using just individual measurements, with limits of the form μx ± 3σx. The centerline is located at x̄, the sample mean of all the observations. To estimate the variability, we take 3σx = 2.66m̄R, where m̄R is the average moving range: the average of all the unsigned differences between consecutive measurements (based on at least 20 measurements). Therefore, the control limits are calculated as follows:
Upper control limit at x̄ + 2.66m̄R
Lower control limit at x̄ − 2.66m̄R
where m̄R = Σ|differences between consecutive measurements| / (n − 1)

When observations are very costly, or the time period between observations is long, the number of observations may be fewer than 20; in that case we take the limits as x̄ ± 1.77m̄R.

Example 2.6.2: A particular product is used as a texture coating. To check on the production process, 21 samples were taken over a period of 2 months in 1994, and a number of measurements were carried out on each sample. Table 2.8 below gives the measurements of one of the variables, pH. Obtain a control chart for the pH measurements.
Solution: The Individuals-chart parameters are as follows:
Centerline at x̄ = Σ(pH values) / 21 = 7.7
m̄R = (0.1 + 0.3 + … + 0.3) / 20 = 0.265
Upper control limit = 7.7 + (2.66 × 0.265) = 8.4
Lower control limit = 7.7 − (2.66 × 0.265) = 7.0

The resulting Individuals-chart is shown in Figure 2.28: a plot of the pH values against time (sample no.) with the centerline and control limits imposed.

Table 2.8: pH measurements for texture coatings of 21 samples.

Sample No.   pH value   |Difference|
1            7.9
2            8.0        0.1
3            7.7        0.3
4            7.7        0.0
5            7.8        0.1
6            8.0        0.2
7            8.4        0.4
8            7.5        0.9
9            7.7        0.2
10           7.8        0.1
11           7.9        0.1
12           7.7        0.2
13           7.2        0.5
14           7.7        0.5
15           7.8        0.1
16           7.7        0.1
17           7.3        0.4
18           7.7        0.4
19           7.6        0.1
20           7.9        0.3
21           7.6        0.3

Figure 2.28: Individuals-chart for the data in example 2.6.2.

The Individuals-chart clearly shows that all the points lie within the control limits, suggesting that the situation is stable with regard to the pH of the coating material.

2.6.4.7. Control charts for attributes data3, 5, 28:
In any process there are usually some items that do not conform to the specification of the process output, and as these out-of-specification items represent extra costs to the process, it is very important to control defective items. Before going through the attribute control charts, we should understand the difference between a defect in an item and a defective item.
 A defective item is a nonconforming unit: it is unusable for its intended purpose and should be discarded, reworked, returned, etc.
 A defect is an imperfection of some type that is undesirable, although it does not necessarily render the entire good or service unusable. Accordingly, one or more defects may not make an entire good or service defective. For example, we would not scrap or discard a computer or a washing machine because of a small scratch in the paint!
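The Individuals-chart computation of example 2.6.2 can be sketched as follows; Python and the variable names are my own. Note that the sketch keeps the unrounded sample mean (≈ 7.74), so the limits come out near 8.45 and 7.04, whereas the handout works from the centerline rounded to 7.7.

```python
# pH measurements from Table 2.8 (example 2.6.2)
ph = [7.9, 8.0, 7.7, 7.7, 7.8, 8.0, 8.4, 7.5, 7.7, 7.8, 7.9,
      7.7, 7.2, 7.7, 7.8, 7.7, 7.3, 7.7, 7.6, 7.9, 7.6]

center = sum(ph) / len(ph)                        # sample mean of all 21 values
moving_ranges = [abs(b - a) for a, b in zip(ph, ph[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)  # average moving range = 0.265
ucl = center + 2.66 * mr_bar                      # upper control limit
lcl = center - 2.66 * mr_bar                      # lower control limit
in_control = all(lcl <= v <= ucl for v in ph)     # every point within limits?
```

As in the handout's Figure 2.28, every pH value lies inside the limits, so the process appears stable.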
When there are multiple opportunities for defects or imperfections in a given unit (such as a large sheet of fabric), we can call each such unit an area of
opportunity, where each area of opportunity is a subgroup. Therefore, a subgroup in attribute control charts is the group of units that was inspected to obtain the number of defects or the number of rejects (defective items). When the areas of opportunity (subgroups or items) are discrete units and a single defect makes the entire unit defective, we use p-charts (or np-charts) to monitor and control the defective items. However, when the areas of opportunity are continuous and more than one defect may occur in a given area of opportunity without making the whole item defective, we use c-charts or u-charts to monitor and control the number of defects.

2.6.4.7.1. Control charts for proportions: the p-chart3, 5, 28, 32
The p-chart is used to chart the proportion of items produced by a process that are defective; it monitors the behavior of the sample proportion of defective items over time. To construct a p-chart with a constant subgroup size, we select subgroups of a fixed n items, compute the sample proportion of defectives for each subgroup, and plot each successive subgroup proportion on the chart. For k subgroups (at least 20, all of size n), the total number inspected is nk. A natural estimate of p, the unknown true probability that an item is produced defective, is p̄, the average of all the subgroup proportions:

p̄ = total number of defectives / total number inspected

To plot a p-chart, we calculate the chart parameters as follows:
- Centerline at p̄
- UCL = p̄ + 3√(p̄(1 − p̄)/n)
- LCL = p̄ − 3√(p̄(1 − p̄)/n)
where n = Σni / k = total production / number of subgroups, or simply the constant subgroup size.
Whenever the calculated LCL is negative, we set LCL = 0. Finally, the defective proportion (p) of a subgroup is calculated as: p = number of defectives / subgroup size ni

Example 2.6.3: An importer of decorative ceramic tiles wants to monitor the proportion of defective tiles, since some tiles are cracked or broken before or during transit, rendering them useless scrap. Each day a sample of 100 tiles is drawn from the total of all tiles received from each tile vendor. Table 2.9 gives the sample results for 30 days of incoming shipments from a particular vendor. Obtain a p-chart for the data.

Table 2.9: Proportions of defectives mentioned in example 2.6.3.

Day      Sample size   Defectives   Proportion (p = Defectives/ni)
1        100           14           0.14
2        100           2            0.02
3        100           11           0.11
4        100           4            0.04
5        100           9            0.09
6        100           7            0.07
7        100           4            0.04
8        100           6            0.06
9        100           3            0.03
10       100           2            0.02
11       100           3            0.03
12       100           8            0.08
13       100           4            0.04
14       100           15           0.15
15       100           5            0.05
16       100           3            0.03
17       100           8            0.08
18       100           4            0.04
19       100           2            0.02
20       100           5            0.05
21       100           5            0.05
22       100           7            0.07
23       100           9            0.09
24       100           1            0.01
25       100           3            0.03
26       100           12           0.12
27       100           9            0.09
28       100           3            0.03
29       100           6            0.06
30       100           9            0.09
Totals   3000          183

Solution: The p-chart parameters are as follows:
Centerline at p̄ = total number of defectives / total number inspected = 183 / 3000 = 0.061
UCL = 0.061 + 3√(0.061(1 − 0.061)/100) = 0.133
LCL = 0.061 − 3√(0.061(1 − 0.061)/100) = −0.011
As a negative lower control limit is meaningless, we set LCL = 0. The resulting p-chart is shown in Figure 2.29.

Figure 2.29: p-chart for the data mentioned in example 2.6.3.
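The p-chart parameters of example 2.6.3 can be checked with a short sketch; Python and the variable names are my own choice, not part of the handout.

```python
from math import sqrt

# Totals from Table 2.9: 30 daily samples of n = 100 tiles, 183 defectives
defectives, inspected, n = 183, 3000, 100

p_bar = defectives / inspected           # centerline = 0.061
sigma_p = sqrt(p_bar * (1 - p_bar) / n)  # std. dev. of a subgroup proportion
ucl = p_bar + 3 * sigma_p                # ~0.133
lcl = max(0.0, p_bar - 3 * sigma_p)      # raw value is negative, so clamp to 0
```

With these limits, the proportions for days 1 (0.14) and 14 (0.15) exceed the UCL of 0.133, which is what flags those days as out of control.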
The p-chart indicates that the process is out of control because of the outliers on days 1 and 14. Further investigation revealed that on both days the regular delivery-truck driver was absent because of illness, so another, untrained driver loaded and drove the delivery truck.

In many cases, the number of items produced (or inspected) is not constant. In these cases we use the average number of items, n̄, instead of n in the calculation of the upper and lower control limits:
UCL = p̄ + 3√(p̄(1 − p̄)/n̄)
LCL = p̄ − 3√(p̄(1 − p̄)/n̄)
where n̄ = Σni / k = total production / number of subgroups. The use of n̄ is valid only when no ni differs from n̄ by more than 25%. The defective proportion (p) of a subgroup is calculated as p = number of defectives / subgroup size ni.

Example 2.6.424: Table 2.10 below gives the number of units produced by a company each week during part of 1994 which required rework at certain stages of production because of faults. Chart the percentages requiring rework (defectives) over time to see whether the process is in control.

Solution:
Centerline at p̄ = total number of defectives / total number inspected = 1404 / 126967 = 0.011058
n̄ = Σni / k = total production / number of subgroups = 126967 / 35 = 3627.63
A variation of 25% on this gives a range of ni values of about 2700 to 4500, which covers all the ni values except subgroups 6 and 35 (shorter weeks because of holidays); therefore the use of n̄ is acceptable.

The resulting p-chart is shown in Figure 2.30: a plot of the subgroup defective proportions (p) against subgroup number (week number), with the centerline and control limits imposed. The p-chart clearly indicates that the process is out of control28.

Table 2.10: The number of units per week requiring rework at certain stages of production28.

Week ending   Subgroup no.   Number of defects   Total production   Proportion (p)
7/5           1              35                  3662               0.0096
14            2              52                  3723               0.0140
21            3              37                  3633               0.0102
28            4              31                  3664               0.0085
4/6           5              23                  3448               0.0067
11            6              31                  2630               0.0118
18            7              21                  3580               0.0059
25            8              30                  3278               0.0092
2/7           9              20                  3797               0.0053
9             10             20                  3893               0.0051
16            11             40                  3991               0.0100
23            12             65                  3760               0.0173
30            13             58                  3590               0.0162
6/8           14             78                  3108               0.0251
13            15             43                  3759               0.0114
20            16             30                  3606               0.0083
27            17             29                  3530               0.0082
3/9           18             56                  3621               0.0155
10            19             41                  3888               0.0105
17            20             32                  3854               0.0083
24            21             81                  3864               0.0210
1/10          22             74                  3846               0.0192
8             23             24                  3856               0.0062
15            24             42                  4072               0.0103
22            25             35                  3693               0.0095
29            26             15                  3394               0.0044
5/11          27             18                  4152               0.0043
12            28             25                  4012               0.0062
19            29             57                  3698               0.0154
26            30             57                  3658               0.0156
3/12          31             42                  3236               0.0130
10            32             71                  3913               0.0181
17            33             40                  3655               0.0109
24            34             24                  3542               0.0068
21/1          35             27                  2356               0.0115
Total                        1404                126967

Figure 2.30: p-chart for the data in example 2.6.428.

Notes:
 Whenever the lower control limit (LCL) is negative, we set LCL = 0.
 Since n varies, we can look at each point outside the limits and re-compute the UCL and LCL using the correct value of ni.

2.6.4.7.2. Control charts for monitoring the number of defects: the c-chart3, 5
When the areas of opportunity (items) are continuous and more than one defect may occur in a given area of opportunity without making the whole item defective, a c-chart can be used to control the process. The c-chart is used when the areas of opportunity are of constant size, while the u-chart is used when they are not. Examples of continuous subgroups (areas of opportunity) are the enamel coat on an appliance, a roll of cloth or plastic film, a month (if you are counting the number of accidents in a month), and a complex machine like a computer (if you are counting the number of imperfections).
On the other hand, examples of constant-size continuous areas of opportunity are: one TV set of a particular model, a particular type of hospital room, one purchase order, 5 m² of paper board, and a 10 m length of wire.

It is important to mention that if we are to use the c-chart (or u-chart), the events under study must satisfy the following conditions: the events must be describable as discrete events; they must occur randomly within some well-defined area of opportunity (a subgroup); they should be relatively rare; and they should be independent of each other.

To construct a c-chart, we denote the number of events (e.g. defects) in any particular item (subgroup) by c. The c-chart is constructed by plotting the sequence of successive c values against time. The centerline of the c-chart is the average number of events observed:

Centerline c̄ = total number of events observed / number of items (areas of opportunity)

The control limits are calculated using SD = √c̄ (note: the standard deviation is sometimes called the standard error). Therefore, the control limits are at:
UCL = c̄ + 3√c̄
LCL = c̄ − 3√c̄

Example 2.6.5: Consider the output of a paper mill: the product appears at the end of a web and is rolled onto a spool called a reel. Every reel is examined for blemishes, which are imperfections; each reel is an area of opportunity. The inspection results are shown in Table 2.11. Construct a c-chart and determine whether the process is in control.

Solution:
 Centerline at c̄ = 150/25 = 6.00
 Control limits at c̄ ± 3√c̄, with standard error √6.00 = 2.45
 UCL = 6.00 + 3(2.45) = 13.35
 LCL = 6.00 − 3(2.45) = −1.35 (use LCL = 0)

The c-chart is shown in Figure 2.31 and clearly indicates that the number of defects in the examined reels is in control.
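The c-chart arithmetic of example 2.6.5 can be reproduced directly from the blemish counts; Python and the variable names are my own choice for illustration.

```python
from math import sqrt

# Blemish counts per reel from Table 2.11 (25 reels, 150 blemishes in total)
blemishes = [4, 5, 5, 10, 6, 4, 5, 6, 3, 6, 6, 7, 11,
             9, 1, 1, 6, 10, 3, 7, 4, 8, 7, 9, 7]

c_bar = sum(blemishes) / len(blemishes)    # centerline = 150/25 = 6.0
ucl = c_bar + 3 * sqrt(c_bar)              # ~13.35
lcl = max(0.0, c_bar - 3 * sqrt(c_bar))    # raw value -1.35, clamped to 0
in_control = all(c <= ucl for c in blemishes)
```

The largest count (11 blemishes, reel 13) stays below the UCL of about 13.35, so the chart signals an in-control process, as in Figure 2.31.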
Also, it should be noted here that the conditions for using a c-chart are met: the imperfections are discrete events, independent of each other, relatively rare, and we assume that they occur randomly.

Table 2.11: Number of blemishes in different reels (example 2.6.5).

Reel No.   No. of blemishes   Reel No.   No. of blemishes
1          4                  14         9
2          5                  15         1
3          5                  16         1
4          10                 17         6
5          6                  18         10
6          4                  19         3
7          5                  20         7
8          6                  21         4
9          3                  22         8
10         6                  23         7
11         6                  24         9
12         7                  25         7
13         11                 Total      150

Figure 2.31: c-chart of the number of defects per reel (example 2.6.5).

Finally, we should mention that the control chart obtained by plotting the number of nonconformities per unit of product, u, is called a u-chart. For sample size n, the chart lines are:
 Centerline at ū
 Upper and lower control limits at ū ± 3√(ū/n)
where ū is the total number of nonconformities in all samples divided by the total number of units in all samples, i.e. the nonconformities per unit over the complete set of test results. We will not go into more detail regarding the u-chart; if you are interested, consult more elaborate references.

2.6.4.8. General summary3, 5, 28, 32, 33:
Before constructing and using a control chart, it is very important to realize the following:
 Processes whose data exhibit only natural or common-cause variation will form a distribution that looks like a bell curve. For these processes, control charts provide useful information.
 If the data are not normally distributed, i.e. do not form a bell curve, the process is already out of control and therefore not predictable. In this case we must look for ways to bring the process into control. For example, the data may be too broad, using measurements from different work shifts that have different process outcomes.
 Every process, by definition, should display some regularity.
Organizing the data collection into rational subgroups, each of which could be in control, is the first step to using control charts.
- It is very important to distinguish between the control process and the improvement process: the control process detects and takes action on sporadic quality problems, while the improvement process identifies and takes action on chronic quality problems.

From the preceding discussion, we can summarize the following:
- Specification limits: limits for the required characteristics of a particular product (or service) which are externally imposed by the customer and applied to individual units.
- Control limits: indicate the limits of variability in a process under control.
- If a process is under control but not delivering within specification limits, fundamental changes are required; these changes are not indicated by the charts.
- It is often easier to change the mean level of a process than its variability.

Practical points for obtaining and using control charts:
- Team work is needed for establishing control charts.
- For complex processes, several charts are best obtained at the strategic points.
- Once a chart signals "out-of-control", immediate action is necessary.
- In constructing the control chart, the mean is most sensitive when the measurements are close together, such as consecutive observations, because in this case all of them are taken under the same conditions.
- The system of measurements needs to be reliable and reproducible.
- Periodically, the centerline and control limits need to be updated using past data collected when the process was in control. Out-of-control points can be ignored in these calculations.

2.6.4.9. Out-of-control behavior: Reading control charts 3, 8, 28, 31, 32, 33

We have used the phrase "in control" in a technical sense only of points lying between two lines, namely the control limits.
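In this basic sense, deciding whether a process is "in control" is simply a matter of testing each plotted point against the two limits. A minimal sketch in plain Python (the data and limits below are illustrative, not taken from the text):

```python
def points_outside_limits(points, lcl, ucl):
    """Return the indices of points falling outside the control limits."""
    return [i for i, x in enumerate(points) if not (lcl <= x <= ucl)]

# Illustrative subgroup values, with limits from a prior in-control period
values = [5.1, 6.3, 5.8, 7.0, 13.9, 6.2]
out = points_outside_limits(values, lcl=0.0, ucl=13.35)
print(out)  # [4] -> the fifth point lies beyond the UCL
```

An empty result list means every point lies between the limits, i.e. "in control" in this narrow sense.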
However, this does not necessarily mean that the distribution of x is actually stable. Control charts can determine whether a process is behaving in an "unusual" way. We will now discuss some additional indicators of out-of-control behavior, namely criteria other than points lying outside the control limits. The process is judged to be unstable if any of the following occurs:

1. One point outside the control limits, i.e. beyond the 3σ limit.
2. Nine points, or more, in a row above or below the center line. This trend may suggest tool wear, improved training of the operator, a gradual shift to new materials, etc.
3. Six points in a row steadily increasing or decreasing.
4. Fourteen points in a row alternating up and down (not necessarily crossing the centerline). This trend could indicate too much adjustment going on after each sample.

Two further tests are suggested when it is economically desirable to have early warning. They are based on dividing the region between the control limits into 6 zones, each 1-sigma wide, with three above the center line and three below. The top three are respectively labeled A (outer third), B (middle third), and C (inner third), and the bottom three are the mirror image.

5. Two out of three consecutive points fall beyond 2σ on the same side of the centerline.
6. Four out of five consecutive points fall beyond 1σ on the same side of the centerline.

We should mention here that rules 5 and 6 are not appropriate when there may be more than one data source. Therefore, only rules 1 to 4 should be considered in most cases. Figure 2.32 below shows some examples of out-of-control behavior.

Figure 2.32: Some examples of out-of-control behavior33.

2.7. Scatter Diagrams

2.7.1. Definition and Overview:

A scatter diagram is a tool for analyzing relationships between two variables. One variable is plotted on the horizontal axis and the other is plotted on the vertical axis.
The pattern of their intersecting points can graphically show relationship patterns. Most often a scatter diagram is used to investigate cause-and-effect relationships; however, it does not by itself prove that one variable causes the other. In addition, a scatter diagram can show that two variables result from a common cause that is unknown. The independent variable is usually plotted on the x-axis, and the dependent variable on the y-axis34, 35, 36.

As a summary, it is evident that a scatter diagram may show correlation between two items for three reasons37:
a. There is a cause-and-effect relationship between the two measured items, where one is causing the other (at least in part).
b. The two measured items are both caused by a third item. For example, a scatter diagram may show a correlation between cracks and transparency of glass utensils because changes in both are caused by changes in furnace temperature.
c. Complete coincidence. It is possible to find high correlation of unrelated items, such as the number of ants crossing a path and newspaper sales.

Scatter diagrams may thus be used to give evidence for a cause-and-effect relationship, but they alone do not prove it35, 36, 37. Establishing such a relationship usually also requires a good understanding of the system being measured, and may require additional experiments. Figure 2.33 below shows an example of a scatter diagram that clarifies the ideas mentioned above37.

2.7.2. Uses of scatter diagrams35
- To examine theories about cause-and-effect relationships.
- To identify potential root causes of problems.
- After brainstorming causes and effects using a fishbone diagram, to determine objectively whether a particular cause and effect are related.
- To determine whether two effects that appear to be related both occur with the same cause.

Figure 2.33: Points on a scatter diagram37.

2.7.3.
Scatter diagram procedure35, 36
1- Collect pairs of data where a relationship is suspected, preferably not less than 30 to 50 paired samples of data.
2- Draw a graph with the independent variable on the horizontal axis (increasing from left to right) and the dependent variable on the vertical axis (increasing from bottom to top).
3- Look at the pattern of points to see if a relationship is obvious. Try to plot the best-fit line or curve using regression analysis if possible.

2.7.4. Interpreting scatter diagrams:

Scatter diagrams can show different types of correlation32, 33. Table 2.11 below shows the different degrees of correlation that can be encountered in scatter diagrams, along with their interpretation37. We should stress here that there is no single clear degree of correlation above which a relationship can be said to exist. Instead, as the degree of correlation increases, the probability of that relationship also increases.

Table 2.11: Degrees of correlation in scatter diagrams37 (example diagrams omitted).

Degree of correlation | Interpretation
None    | No relationship can be seen. The 'effect' is not related to the 'cause' in any way.
Low     | A vague relationship is seen. The 'cause' may affect the 'effect', but only distantly. There are either more immediate causes to be found or there is significant variation in the 'effect'.
High    | The points are grouped into a clear linear shape. It is probable that the 'cause' is directly related to the 'effect'. Hence, any change in 'cause' will result in a reasonably predictable change in 'effect'.
Perfect | All points lie on a line (which is usually straight). Given any 'cause' value, the corresponding 'effect' value can be predicted with complete certainty.

2.7.5. Types of correlation36, 37

If there is sufficient correlation, then the shape of the scatter diagram will indicate the type of correlation.
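The "degree of correlation" described in Table 2.11 can be quantified with the sample correlation coefficient r, computed from the paired data; values near ±1 correspond to "high" or "perfect" correlation, values near 0 to "none". A minimal sketch in plain Python (the temperature/crack pairs are hypothetical, echoing the glass-utensil example):

```python
import math

def pearson_r(xs, ys):
    """Sample (Pearson) correlation coefficient between paired data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical pairs: furnace temperature vs. number of cracks
temps  = [150, 160, 170, 180, 190, 200]
cracks = [2, 3, 5, 6, 8, 9]
r = pearson_r(temps, cracks)
print(round(r, 3))  # ~ 0.995: a high positive correlation
```

As the text stresses, a large r by itself does not prove causation; it only measures how tightly the points cluster around a straight line.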
The most common shape is a straight line, either sloping up (positive correlation) or sloping down (negative correlation). Table 2.1237 below shows the different types of correlation that can be encountered in scatter diagrams, along with their interpretation. A final point that should be noted here is that points which appear well outside a visible trend region may be due to special causes of variation, and should be investigated as such.

Table 2.12: Types of correlation in scatter diagrams37 (example diagrams omitted).

Type of correlation | Interpretation
Positive    | Straight line, sloping up from left to right. Increasing the value of the 'cause' results in a proportionate increase in the value of the 'effect'.
Negative    | Straight line, sloping down from left to right. Increasing the value of the 'cause' results in a proportionate decrease in the value of the 'effect'.
Curved      | Various curves, typically U- or S-shaped. Changing the value of the 'cause' results in the 'effect' changing differently, depending on the position on the curve.
Part linear | Part of the diagram is a straight line (sloping up or down). May be due to breakdown or overload of the 'effect', or is a curve with a part that approximates a straight line (which may be treated as such).

2.7.6. Scatter Diagram considerations:
- Even if a scatter diagram shows a relationship, do not assume that one variable caused the other. Both may be influenced by a third variable35, 37.
- If a scatter diagram shows no relationship, consider whether the independent (x-axis) variable has been varied widely. Sometimes a relationship is not apparent because the data do not cover a wide enough range35.
- If a scatter diagram shows no relationship between variables, consider whether the data might be stratified35.
- Drawing a scatter diagram is the first step in looking for a relationship between variables35, 36, 37.

2.7.7.
Stratification of scatter diagrams: As mentioned before, stratification is a technique used to separate and analyze data collected from a
