Data Preprocessing
Summary
This chapter provides an overview of data preprocessing techniques in data mining. It discusses the importance of data quality, including accuracy, completeness, consistency, and timeliness in ensuring high-quality mining results. Several data preprocessing methods are highlighted, including data cleaning, integration, reduction, and transformation.
Full Transcript
HAN 10-ch03-083-124-9780123814791 2011/6/1 3:16 Page 83 #1 Data Preprocessing 3 Today’s real-world databases are highly susceptible to noisy, missing, and inconsistent data due to their typically huge size (often several gigabytes or more) and their likely origin from multiple, heterogenous sources. Low-quality data will lead to low-quality mining results. “How can the data be preprocessed in order to help improve the quality of the data and, consequently, of the mining results? How can the data be preprocessed so as to improve the efficiency and ease of the mining process?” There are several data preprocessing techniques. Data cleaning can be applied to remove noise and correct inconsistencies in data. Data integration merges data from multiple sources into a coherent data store such as a data warehouse. Data reduction can reduce data size by, for instance, aggregating, eliminating redundant features, or clustering. Data transformations (e.g., normalization) may be applied, where data are scaled to fall within a smaller range like 0.0 to 1.0. This can improve the accuracy and efficiency of mining algorithms involving distance measurements. These techniques are not mutually exclusive; they may work together. For example, data cleaning can involve transformations to correct wrong data, such as by transforming all entries for a date field to a common format. In Chapter 2, we learned about the different attribute types and how to use basic statistical descriptions to study data characteristics. These can help identify erroneous values and outliers, which will be useful in the data cleaning and integration steps. Data processing techniques, when applied before mining, can substantially improve the overall quality of the patterns mined and/or the time required for the actual mining. In this chapter, we introduce the basic concepts of data preprocessing in Section 3.1. The methods for data preprocessing are organized into the following categories: data cleaning (Section 3.2), data integration (Section 3.3), data reduction (Section 3.4), and data transformation (Section 3.5). Data Mining: Concepts and Techniques c 2012 Elsevier Inc. All rights reserved. 83 HAN 10-ch03-083-124-9780123814791 2011/6/1 3:16 Page 84 #2 84 Chapter 3 Data Preprocessing 3.1 Data Preprocessing: An Overview This section presents an overview of data preprocessing. Section 3.1.1 illustrates the many elements defining data quality. This provides the incentive behind data prepro- cessing. Section 3.1.2 outlines the major tasks in data preprocessing. 3.1.1 Data Quality: Why Preprocess the Data? Data have quality if they satisfy the requirements of the intended use. There are many factors comprising data quality, including accuracy, completeness, consistency, timeliness, believability, and interpretability. Imagine that you are a manager at AllElectronics and have been charged with ana- lyzing the company’s data with respect to your branch’s sales. You immediately set out to perform this task. You carefully inspect the company’s database and data warehouse, identifying and selecting the attributes or dimensions (e.g., item, price, and units sold) to be included in your analysis. Alas! You notice that several of the attributes for various tuples have no recorded value. For your analysis, you would like to include informa- tion as to whether each item purchased was advertised as on sale, yet you discover that this information has not been recorded. 
Furthermore, users of your database system have reported errors, unusual values, and inconsistencies in the data recorded for some transactions. In other words, the data you wish to analyze by data mining techniques are incomplete (lacking attribute values or certain attributes of interest, or containing only aggregate data); inaccurate or noisy (containing errors, or values that deviate from the expected); and inconsistent (e.g., containing discrepancies in the department codes used to categorize items). Welcome to the real world! This scenario illustrates three of the elements defining data quality: accuracy, com- pleteness, and consistency. Inaccurate, incomplete, and inconsistent data are common- place properties of large real-world databases and data warehouses. There are many possible reasons for inaccurate data (i.e., having incorrect attribute values). The data col- lection instruments used may be faulty. There may have been human or computer errors occurring at data entry. Users may purposely submit incorrect data values for manda- tory fields when they do not wish to submit personal information (e.g., by choosing the default value “January 1” displayed for birthday). This is known as disguised missing data. Errors in data transmission can also occur. There may be technology limitations such as limited buffer size for coordinating synchronized data transfer and consump- tion. Incorrect data may also result from inconsistencies in naming conventions or data codes, or inconsistent formats for input fields (e.g., date). Duplicate tuples also require data cleaning. Incomplete data can occur for a number of reasons. Attributes of interest may not always be available, such as customer information for sales transaction data. Other data may not be included simply because they were not considered important at the time of entry. Relevant data may not be recorded due to a misunderstanding or because of equipment malfunctions. Data that were inconsistent with other recorded data may HAN 10-ch03-083-124-9780123814791 2011/6/1 3:16 Page 85 #3 3.1 Data Preprocessing: An Overview 85 have been deleted. Furthermore, the recording of the data history or modifications may have been overlooked. Missing data, particularly for tuples with missing values for some attributes, may need to be inferred. Recall that data quality depends on the intended use of the data. Two different users may have very different assessments of the quality of a given database. For example, a marketing analyst may need to access the database mentioned before for a list of cus- tomer addresses. Some of the addresses are outdated or incorrect, yet overall, 80% of the addresses are accurate. The marketing analyst considers this to be a large customer database for target marketing purposes and is pleased with the database’s accuracy, although, as sales manager, you found the data inaccurate. Timeliness also affects data quality. Suppose that you are overseeing the distribu- tion of monthly sales bonuses to the top sales representatives at AllElectronics. Several sales representatives, however, fail to submit their sales records on time at the end of the month. There are also a number of corrections and adjustments that flow in after the month’s end. For a period of time following each month, the data stored in the database are incomplete. However, once all of the data are received, it is correct. The fact that the month-end data are not updated in a timely fashion has a negative impact on the data quality. 
Two other factors affecting data quality are believability and interpretability. Believ- ability reflects how much the data are trusted by users, while interpretability reflects how easy the data are understood. Suppose that a database, at one point, had several errors, all of which have since been corrected. The past errors, however, had caused many problems for sales department users, and so they no longer trust the data. The data also use many accounting codes, which the sales department does not know how to interpret. Even though the database is now accurate, complete, consistent, and timely, sales department users may regard it as of low quality due to poor believability and interpretability. 3.1.2 Major Tasks in Data Preprocessing In this section, we look at the major steps involved in data preprocessing, namely, data cleaning, data integration, data reduction, and data transformation. Data cleaning routines work to “clean” the data by filling in missing values, smooth- ing noisy data, identifying or removing outliers, and resolving inconsistencies. If users believe the data are dirty, they are unlikely to trust the results of any data mining that has been applied. Furthermore, dirty data can cause confusion for the mining procedure, resulting in unreliable output. Although most mining routines have some procedures for dealing with incomplete or noisy data, they are not always robust. Instead, they may concentrate on avoiding overfitting the data to the function being modeled. Therefore, a useful preprocessing step is to run your data through some data cleaning routines. Section 3.2 discusses methods for data cleaning. Getting back to your task at AllElectronics, suppose that you would like to include data from multiple sources in your analysis. This would involve integrating multiple databases, data cubes, or files (i.e., data integration). Yet some attributes representing a HAN 10-ch03-083-124-9780123814791 2011/6/1 3:16 Page 86 #4 86 Chapter 3 Data Preprocessing given concept may have different names in different databases, causing inconsistencies and redundancies. For example, the attribute for customer identification may be referred to as customer id in one data store and cust id in another. Naming inconsistencies may also occur for attribute values. For example, the same first name could be registered as “Bill” in one database, “William” in another, and “B.” in a third. Furthermore, you sus- pect that some attributes may be inferred from others (e.g., annual revenue). Having a large amount of redundant data may slow down or confuse the knowledge discov- ery process. Clearly, in addition to data cleaning, steps must be taken to help avoid redundancies during data integration. Typically, data cleaning and data integration are performed as a preprocessing step when preparing data for a data warehouse. Addi- tional data cleaning can be performed to detect and remove redundancies that may have resulted from data integration. “Hmmm,” you wonder, as you consider your data even further. “The data set I have selected for analysis is HUGE, which is sure to slow down the mining process. Is there a way I can reduce the size of my data set without jeopardizing the data mining results?” Data reduction obtains a reduced representation of the data set that is much smaller in volume, yet produces the same (or almost the same) analytical results. Data reduction strategies include dimensionality reduction and numerosity reduction. 
In dimensionality reduction, data encoding schemes are applied so as to obtain a reduced or "compressed" representation of the original data. Examples include data compression techniques (e.g., wavelet transforms and principal components analysis), attribute subset selection (e.g., removing irrelevant attributes), and attribute construction (e.g., where a small set of more useful attributes is derived from the original set). In numerosity reduction, the data are replaced by alternative, smaller representations using parametric models (e.g., regression or log-linear models) or nonparametric models (e.g., histograms, clusters, sampling, or data aggregation). Data reduction is the topic of Section 3.4.

Getting back to your data, you have decided, say, that you would like to use a distance-based mining algorithm for your analysis, such as neural networks, nearest-neighbor classifiers, or clustering. (Neural networks and nearest-neighbor classifiers are described in Chapter 9; clustering is discussed in Chapters 10 and 11.) Such methods provide better results if the data to be analyzed have been normalized, that is, scaled to a smaller range such as [0.0, 1.0]. Your customer data, for example, contain the attributes age and annual salary. The annual salary attribute usually takes much larger values than age. Therefore, if the attributes are left unnormalized, the distance measurements taken on annual salary will generally outweigh distance measurements taken on age. Discretization and concept hierarchy generation can also be useful, where raw data values for attributes are replaced by ranges or higher conceptual levels. For example, raw values for age may be replaced by higher-level concepts, such as youth, adult, or senior.

Discretization and concept hierarchy generation are powerful tools for data mining in that they allow data mining at multiple abstraction levels. Normalization, data discretization, and concept hierarchy generation are forms of data transformation. You soon realize such data transformation operations are additional data preprocessing procedures that would contribute toward the success of the mining process. Data transformation and data discretization are discussed in Section 3.5.

Figure 3.1 summarizes the data preprocessing steps described here. Note that the previous categorization is not mutually exclusive. For example, the removal of redundant data may be seen as a form of data cleaning, as well as data reduction.

In summary, real-world data tend to be dirty, incomplete, and inconsistent. Data preprocessing techniques can improve data quality, thereby helping to improve the accuracy and efficiency of the subsequent mining process. Data preprocessing is an important step in the knowledge discovery process, because quality decisions must be based on quality data. Detecting data anomalies, rectifying them early, and reducing the data to be analyzed can lead to huge payoffs for decision making.

[Figure 3.1 Forms of data preprocessing: data cleaning; data integration; data reduction (e.g., attributes A1, A2, A3, ..., A126 reduced to A1, A3, ..., A115, and transactions T1, T2, T3, ..., T2000 reduced to T1, T4, ..., T1456); and data transformation (e.g., the values -2, 32, 100, 59, 48 scaled to -0.02, 0.32, 1.00, 0.59, 0.48).]

3.2 Data Cleaning

Real-world data tend to be incomplete, noisy, and inconsistent.
Data cleaning (or data cleansing) routines attempt to fill in missing values, smooth out noise while identi- fying outliers, and correct inconsistencies in the data. In this section, you will study basic methods for data cleaning. Section 3.2.1 looks at ways of handling missing values. Section 3.2.2 explains data smoothing techniques. Section 3.2.3 discusses approaches to data cleaning as a process. 3.2.1 Missing Values Imagine that you need to analyze AllElectronics sales and customer data. You note that many tuples have no recorded value for several attributes such as customer income. How can you go about filling in the missing values for this attribute? Let’s look at the following methods. 1. Ignore the tuple: This is usually done when the class label is missing (assuming the mining task involves classification). This method is not very effective, unless the tuple contains several attributes with missing values. It is especially poor when the percent- age of missing values per attribute varies considerably. By ignoring the tuple, we do not make use of the remaining attributes’ values in the tuple. Such data could have been useful to the task at hand. 2. Fill in the missing value manually: In general, this approach is time consuming and may not be feasible given a large data set with many missing values. 3. Use a global constant to fill in the missing value: Replace all missing attribute values by the same constant such as a label like “Unknown” or 1. If missing values are replaced by, say, “Unknown,” then the mining program may mistakenly think that they form an interesting concept, since they all have a value in common—that of “Unknown.” Hence, although this method is simple, it is not foolproof. 4. Use a measure of central tendency for the attribute (e.g., the mean or median) to fill in the missing value: Chapter 2 discussed measures of central tendency, which indicate the “middle” value of a data distribution. For normal (symmetric) data dis- tributions, the mean can be used, while skewed data distribution should employ the median (Section 2.2). For example, suppose that the data distribution regard- ing the income of AllElectronics customers is symmetric and that the mean income is $56,000. Use this value to replace the missing value for income. 5. Use the attribute mean or median for all samples belonging to the same class as the given tuple: For example, if classifying customers according to credit risk, we may replace the missing value with the mean income value for customers in the same credit risk category as that of the given tuple. If the data distribution for a given class is skewed, the median value is a better choice. 6. Use the most probable value to fill in the missing value: This may be determined with regression, inference-based tools using a Bayesian formalism, or decision tree HAN 10-ch03-083-124-9780123814791 2011/6/1 3:16 Page 89 #7 3.2 Data Cleaning 89 induction. For example, using the other customer attributes in your data set, you may construct a decision tree to predict the missing values for income. Decision trees and Bayesian inference are described in detail in Chapters 8 and 9, respectively, while regression is introduced in Section 3.4.5. Methods 3 through 6 bias the data—the filled-in value may not be correct. Method 6, however, is a popular strategy. In comparison to the other methods, it uses the most information from the present data to predict missing values. 
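As an illustration of methods 4 through 6, the following is a minimal pandas sketch (not from the book); the column names (age, credit_risk, income) and the choice of a decision tree regressor for method 6 are illustrative assumptions, not the chapter's prescribed implementation.

```python
# Minimal sketch (not from the book) of methods 4-6 for handling missing values.
# Column names (age, credit_risk, income) are illustrative.

import pandas as pd
from sklearn.tree import DecisionTreeRegressor

df = pd.DataFrame({
    "age":         [25, 38, 47, 52, 61, 33],
    "credit_risk": ["low", "high", "low", "high", "low", "high"],
    "income":      [30000, None, 58000, 46000, None, 41000],
})

# Method 4: fill with a global measure of central tendency (here, the median).
global_fill = df["income"].fillna(df["income"].median())

# Method 5: fill with the median of tuples belonging to the same class (credit_risk).
class_fill = df["income"].fillna(
    df.groupby("credit_risk")["income"].transform("median"))

# Method 6: predict the most probable value from the other attributes,
# here with a decision tree regressor trained on the complete tuples.
known = df[df["income"].notna()]
missing = df[df["income"].isna()]
model = DecisionTreeRegressor().fit(known[["age"]], known["income"])
predicted = df["income"].copy()
predicted[df["income"].isna()] = model.predict(missing[["age"]])
```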
By considering the other attributes’ values in its estimation of the missing value for income, there is a greater chance that the relationships between income and the other attributes are preserved. It is important to note that, in some cases, a missing value may not imply an error in the data! For example, when applying for a credit card, candidates may be asked to supply their driver’s license number. Candidates who do not have a driver’s license may naturally leave this field blank. Forms should allow respondents to specify values such as “not applicable.” Software routines may also be used to uncover other null values (e.g., “don’t know,” “?” or “none”). Ideally, each attribute should have one or more rules regarding the null condition. The rules may specify whether or not nulls are allowed and/or how such values should be handled or transformed. Fields may also be inten- tionally left blank if they are to be provided in a later step of the business process. Hence, although we can try our best to clean the data after it is seized, good database and data entry procedure design should help minimize the number of missing values or errors in the first place. 3.2.2 Noisy Data “What is noise?” Noise is a random error or variance in a measured variable. In Chapter 2, we saw how some basic statistical description techniques (e.g., boxplots and scatter plots), and methods of data visualization can be used to identify outliers, which may represent noise. Given a numeric attribute such as, say, price, how can we “smooth” out the data to remove the noise? Let’s look at the following data smoothing techniques. Binning: Binning methods smooth a sorted data value by consulting its “neighbor- hood,” that is, the values around it. The sorted values are distributed into a number of “buckets,” or bins. Because binning methods consult the neighborhood of values, they perform local smoothing. Figure 3.2 illustrates some binning techniques. In this example, the data for price are first sorted and then partitioned into equal-frequency bins of size 3 (i.e., each bin contains three values). In smoothing by bin means, each value in a bin is replaced by the mean value of the bin. For example, the mean of the values 4, 8, and 15 in Bin 1 is 9. Therefore, each original value in this bin is replaced by the value 9. Similarly, smoothing by bin medians can be employed, in which each bin value is replaced by the bin median. In smoothing by bin boundaries, the minimum and maximum values in a given bin are identified as the bin boundaries. Each bin value is then replaced by the closest boundary value. In general, the larger the width, the HAN 10-ch03-083-124-9780123814791 2011/6/1 3:16 Page 90 #8 90 Chapter 3 Data Preprocessing Sorted data for price (in dollars): 4, 8, 15, 21, 21, 24, 25, 28, 34 Partition into (equal-frequency) bins: Bin 1: 4, 8, 15 Bin 2: 21, 21, 24 Bin 3: 25, 28, 34 Smoothing by bin means: Bin 1: 9, 9, 9 Bin 2: 22, 22, 22 Bin 3: 29, 29, 29 Smoothing by bin boundaries: Bin 1: 4, 4, 15 Bin 2: 21, 21, 24 Bin 3: 25, 25, 34 Figure 3.2 Binning methods for data smoothing. greater the effect of the smoothing. Alternatively, bins may be equal width, where the interval range of values in each bin is constant. Binning is also used as a discretization technique and is further discussed in Section 3.5. Regression: Data smoothing can also be done by regression, a technique that con- forms data values to a function. 
Linear regression involves finding the “best” line to fit two attributes (or variables) so that one attribute can be used to predict the other. Multiple linear regression is an extension of linear regression, where more than two attributes are involved and the data are fit to a multidimensional surface. Regression is further described in Section 3.4.5. Outlier analysis: Outliers may be detected by clustering, for example, where similar values are organized into groups, or “clusters.” Intuitively, values that fall outside of the set of clusters may be considered outliers (Figure 3.3). Chapter 12 is dedicated to the topic of outlier analysis. Many data smoothing methods are also used for data discretization (a form of data transformation) and data reduction. For example, the binning techniques described before reduce the number of distinct values per attribute. This acts as a form of data reduction for logic-based data mining methods, such as decision tree induction, which repeatedly makes value comparisons on sorted data. Concept hierarchies are a form of data discretization that can also be used for data smoothing. A concept hierarchy for price, for example, may map real price values into inexpensive, moderately priced, and expensive, thereby reducing the number of data values to be handled by the mining HAN 10-ch03-083-124-9780123814791 2011/6/1 3:16 Page 91 #9 3.2 Data Cleaning 91 Figure 3.3 A 2-D customer data plot with respect to customer locations in a city, showing three data clusters. Outliers may be detected as values that fall outside of the cluster sets. process. Data discretization is discussed in Section 3.5. Some methods of classification (e.g., neural networks) have built-in data smoothing mechanisms. Classification is the topic of Chapters 8 and 9. 3.2.3 Data Cleaning as a Process Missing values, noise, and inconsistencies contribute to inaccurate data. So far, we have looked at techniques for handling missing data and for smoothing data. “But data clean- ing is a big job. What about data cleaning as a process? How exactly does one proceed in tackling this task? Are there any tools out there to help?” The first step in data cleaning as a process is discrepancy detection. Discrepancies can be caused by several factors, including poorly designed data entry forms that have many optional fields, human error in data entry, deliberate errors (e.g., respondents not want- ing to divulge information about themselves), and data decay (e.g., outdated addresses). Discrepancies may also arise from inconsistent data representations and inconsistent use of codes. Other sources of discrepancies include errors in instrumentation devices that record data and system errors. Errors can also occur when the data are (inadequately) used for purposes other than originally intended. There may also be inconsistencies due to data integration (e.g., where a given attribute can have different names in different databases).2 2 Dataintegration and the removal of redundant data that can result from such integration are further described in Section 3.3. HAN 10-ch03-083-124-9780123814791 2011/6/1 3:16 Page 92 #10 92 Chapter 3 Data Preprocessing “So, how can we proceed with discrepancy detection?” As a starting point, use any knowledge you may already have regarding properties of the data. Such knowledge or “data about data” is referred to as metadata. This is where we can make use of the know- ledge we gained about our data in Chapter 2. For example, what are the data type and domain of each attribute? 
What are the acceptable values for each attribute? The basic statistical data descriptions discussed in Section 2.2 are useful here to grasp data trends and identify anomalies. For example, find the mean, median, and mode values. Are the data symmetric or skewed? What is the range of values? Do all values fall within the expected range? What is the standard deviation of each attribute? Values that are more than two standard deviations away from the mean for a given attribute may be flagged as potential outliers. Are there any known dependencies between attributes? In this step, you may write your own scripts and/or use some of the tools that we discuss further later. From this, you may find noise, outliers, and unusual values that need investigation. As a data analyst, you should be on the lookout for the inconsistent use of codes and any inconsistent data representations (e.g., “2010/12/25” and “25/12/2010” for date). Field overloading is another error source that typically results when developers squeeze new attribute definitions into unused (bit) portions of already defined attributes (e.g., an unused bit of an attribute that has a value range that uses only, say, 31 out of 32 bits). The data should also be examined regarding unique rules, consecutive rules, and null rules. A unique rule says that each value of the given attribute must be different from all other values for that attribute. A consecutive rule says that there can be no miss- ing values between the lowest and highest values for the attribute, and that all values must also be unique (e.g., as in check numbers). A null rule specifies the use of blanks, question marks, special characters, or other strings that may indicate the null condition (e.g., where a value for a given attribute is not available), and how such values should be handled. As mentioned in Section 3.2.1, reasons for missing values may include (1) the person originally asked to provide a value for the attribute refuses and/or finds that the information requested is not applicable (e.g., a license number attribute left blank by nondrivers); (2) the data entry person does not know the correct value; or (3) the value is to be provided by a later step of the process. The null rule should specify how to record the null condition, for example, such as to store zero for numeric attributes, a blank for character attributes, or any other conventions that may be in use (e.g., entries like “don’t know” or “?” should be transformed to blank). There are a number of different commercial tools that can aid in the discrepancy detection step. Data scrubbing tools use simple domain knowledge (e.g., knowledge of postal addresses and spell-checking) to detect errors and make corrections in the data. These tools rely on parsing and fuzzy matching techniques when cleaning data from multiple sources. Data auditing tools find discrepancies by analyzing the data to discover rules and relationships, and detecting data that violate such conditions. They are variants of data mining tools. For example, they may employ statistical analysis to find correlations, or clustering to identify outliers. They may also use the basic statistical data descriptions presented in Section 2.2. Some data inconsistencies may be corrected manually using external references. For example, errors made at data entry may be corrected by performing a paper HAN 10-ch03-083-124-9780123814791 2011/6/1 3:16 Page 93 #11 3.3 Data Integration 93 trace. Most errors, however, will require data transformations. 
That is, once we find discrepancies, we typically need to define and apply (a series of) transformations to correct them. Commercial tools can assist in the data transformation step. Data migration tools allow simple transformations to be specified such as to replace the string “gender” by “sex.” ETL (extraction/transformation/loading) tools allow users to specify transforms through a graphical user interface (GUI). These tools typically support only a restricted set of transforms so that, often, we may also choose to write custom scripts for this step of the data cleaning process. The two-step process of discrepancy detection and data transformation (to correct discrepancies) iterates. This process, however, is error-prone and time consuming. Some transformations may introduce more discrepancies. Some nested discrepancies may only be detected after others have been fixed. For example, a typo such as “20010” in a year field may only surface once all date values have been converted to a uniform format. Transformations are often done as a batch process while the user waits without feedback. Only after the transformation is complete can the user go back and check that no new anomalies have been mistakenly created. Typically, numerous iterations are required before the user is satisfied. Any tuples that cannot be automatically handled by a given transformation are typically written to a file without any explanation regarding the rea- soning behind their failure. As a result, the entire data cleaning process also suffers from a lack of interactivity. New approaches to data cleaning emphasize increased interactivity. Potter’s Wheel, for example, is a publicly available data cleaning tool that integrates discrepancy detec- tion and transformation. Users gradually build a series of transformations by composing and debugging individual transformations, one step at a time, on a spreadsheet-like interface. The transformations can be specified graphically or by providing examples. Results are shown immediately on the records that are visible on the screen. The user can choose to undo the transformations, so that transformations that introduced addi- tional errors can be “erased.” The tool automatically performs discrepancy checking in the background on the latest transformed view of the data. Users can gradually develop and refine transformations as discrepancies are found, leading to more effective and efficient data cleaning. Another approach to increased interactivity in data cleaning is the development of declarative languages for the specification of data transformation operators. Such work focuses on defining powerful extensions to SQL and algorithms that enable users to express data cleaning specifications efficiently. As we discover more about the data, it is important to keep updating the metadata to reflect this knowledge. This will help speed up data cleaning on future versions of the same data store. 3.3 Data Integration Data mining often requires data integration—the merging of data from multiple data stores. Careful integration can help reduce and avoid redundancies and inconsistencies HAN 10-ch03-083-124-9780123814791 2011/6/1 3:16 Page 94 #12 94 Chapter 3 Data Preprocessing in the resulting data set. This can help improve the accuracy and speed of the subsequent data mining process. The semantic heterogeneity and structure of data pose great challenges in data inte- gration. How can we match schema and objects from different sources? 
This is the essence of the entity identification problem, described in Section 3.3.1. Are any attributes correlated? Section 3.3.2 presents correlation tests for numeric and nominal data. Tuple duplication is described in Section 3.3.3. Finally, Section 3.3.4 touches on the detection and resolution of data value conflicts. 3.3.1 Entity Identification Problem It is likely that your data analysis task will involve data integration, which combines data from multiple sources into a coherent data store, as in data warehousing. These sources may include multiple databases, data cubes, or flat files. There are a number of issues to consider during data integration. Schema integration and object matching can be tricky. How can equivalent real-world entities from multiple data sources be matched up? This is referred to as the entity identification problem. For example, how can the data analyst or the computer be sure that customer id in one database and cust number in another refer to the same attribute? Examples of metadata for each attribute include the name, meaning, data type, and range of values permitted for the attribute, and null rules for handling blank, zero, or null values (Section 3.2). Such metadata can be used to help avoid errors in schema integration. The metadata may also be used to help transform the data (e.g., where data codes for pay type in one database may be “H” and “S” but 1 and 2 in another). Hence, this step also relates to data cleaning, as described earlier. When matching attributes from one database to another during integration, special attention must be paid to the structure of the data. This is to ensure that any attribute functional dependencies and referential constraints in the source system match those in the target system. For example, in one system, a discount may be applied to the order, whereas in another system it is applied to each individual line item within the order. If this is not caught before integration, items in the target system may be improperly discounted. 3.3.2 Redundancy and Correlation Analysis Redundancy is another important issue in data integration. An attribute (such as annual revenue, for instance) may be redundant if it can be “derived” from another attribute or set of attributes. Inconsistencies in attribute or dimension naming can also cause redundancies in the resulting data set. Some redundancies can be detected by correlation analysis. Given two attributes, such analysis can measure how strongly one attribute implies the other, based on the available data. For nominal data, we use the 2 (chi-square) test. For numeric attributes, we can use the correlation coefficient and covariance, both of which access how one attribute’s values vary from those of another. HAN 10-ch03-083-124-9780123814791 2011/6/1 3:16 Page 95 #13 3.3 Data Integration 95 2 Correlation Test for Nominal Data For nominal data, a correlation relationship between two attributes, A and B, can be discovered by a 2 (chi-square) test. Suppose A has c distinct values, namely a1 , a2 ,... ac. B has r distinct values, namely b1 , b2 ,... br. The data tuples described by A and B can be shown as a contingency table, with the c values of A making up the columns and the r values of B making up the rows. Let (Ai , Bj ) denote the joint event that attribute A takes on value ai and attribute B takes on value bj , that is, where (A = ai , B = bj ). Each and every possible (Ai , Bj ) joint event has its own cell (or slot) in the table. 
The chi-square value (also known as the Pearson chi-square statistic) is computed as

\chi^2 = \sum_{i=1}^{c} \sum_{j=1}^{r} \frac{(o_{ij} - e_{ij})^2}{e_{ij}} \qquad (3.1)

where o_{ij} is the observed frequency (i.e., actual count) of the joint event (A_i, B_j) and e_{ij} is the expected frequency of (A_i, B_j), which can be computed as

e_{ij} = \frac{count(A = a_i) \times count(B = b_j)}{n} \qquad (3.2)

where n is the number of data tuples, count(A = a_i) is the number of tuples having value a_i for A, and count(B = b_j) is the number of tuples having value b_j for B. The sum in Eq. (3.1) is computed over all of the r x c cells. Note that the cells that contribute the most to the chi-square value are those for which the actual count is very different from that expected.

The chi-square statistic tests the hypothesis that A and B are independent, that is, there is no correlation between them. The test is based on a significance level, with (r - 1) x (c - 1) degrees of freedom. We illustrate the use of this statistic in Example 3.1. If the hypothesis can be rejected, then we say that A and B are statistically correlated.

Example 3.1 Correlation analysis of nominal attributes using chi-square. Suppose that a group of 1500 people was surveyed. The gender of each person was noted. Each person was polled as to whether his or her preferred type of reading material was fiction or nonfiction. Thus, we have two attributes, gender and preferred reading. The observed frequency (or count) of each possible joint event is summarized in the contingency table shown in Table 3.1, where the numbers in parentheses are the expected frequencies. The expected frequencies are calculated based on the data distribution for both attributes using Eq. (3.2).

Using Eq. (3.2), we can verify the expected frequencies for each cell. For example, the expected frequency for the cell (male, fiction) is

e_{11} = \frac{count(male) \times count(fiction)}{n} = \frac{300 \times 450}{1500} = 90,

and so on. Notice that in any row, the sum of the expected frequencies must equal the total observed frequency for that row, and the sum of the expected frequencies in any column must also equal the total observed frequency for that column.

Table 3.1 Example 3.1's 2 x 2 Contingency Table Data (expected frequencies in parentheses)

|             | male     | female     | Total |
|-------------|----------|------------|-------|
| fiction     | 250 (90) | 200 (360)  | 450   |
| non fiction | 50 (210) | 1000 (840) | 1050  |
| Total       | 300      | 1200       | 1500  |

Note: Are gender and preferred reading correlated?

Using Eq. (3.1) for chi-square computation, we get

\chi^2 = \frac{(250 - 90)^2}{90} + \frac{(50 - 210)^2}{210} + \frac{(200 - 360)^2}{360} + \frac{(1000 - 840)^2}{840} = 284.44 + 121.90 + 71.11 + 30.48 = 507.93.

For this 2 x 2 table, the degrees of freedom are (2 - 1)(2 - 1) = 1. For 1 degree of freedom, the chi-square value needed to reject the hypothesis at the 0.001 significance level is 10.828 (taken from the table of upper percentage points of the chi-square distribution, typically available from any textbook on statistics). Since our computed value is above this, we can reject the hypothesis that gender and preferred reading are independent and conclude that the two attributes are (strongly) correlated for the given group of people.
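The computation in Example 3.1 can be reproduced directly from Eqs. (3.1) and (3.2). The following is a minimal Python sketch (not from the book); the observed counts are those of Table 3.1, and the variable names are illustrative.

```python
# Minimal sketch (not from the book): verifying the chi-square computation of
# Example 3.1 directly from Eqs. (3.1) and (3.2).

import numpy as np

# Observed frequencies from Table 3.1:
# rows = preferred reading (fiction, non fiction), columns = gender (male, female).
observed = np.array([[250.0, 200.0],
                     [50.0, 1000.0]])

n = observed.sum()                  # total number of tuples (1500)
row_totals = observed.sum(axis=1)   # 450, 1050
col_totals = observed.sum(axis=0)   # 300, 1200

# Eq. (3.2): expected frequency e_ij = count(A = a_i) * count(B = b_j) / n
expected = np.outer(row_totals, col_totals) / n

# Eq. (3.1): chi-square statistic, summed over all r x c cells
chi2 = ((observed - expected) ** 2 / expected).sum()
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)

print(f"chi^2 = {chi2:.2f} with {dof} degree(s) of freedom")  # ~507.93, 1 dof
# Since 507.93 > 10.828 (the 0.001 critical value for 1 degree of freedom),
# gender and preferred reading are not independent for this group of people.
```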
Correlation Coefficient for Numeric Data

For numeric attributes, we can evaluate the correlation between two attributes, A and B, by computing the correlation coefficient (also known as Pearson's product moment coefficient, named after its inventor, Karl Pearson). This is

r_{A,B} = \frac{\sum_{i=1}^{n} (a_i - \bar{A})(b_i - \bar{B})}{n\,\sigma_A \sigma_B} = \frac{\sum_{i=1}^{n} (a_i b_i) - n\bar{A}\bar{B}}{n\,\sigma_A \sigma_B}, \qquad (3.3)

where n is the number of tuples, a_i and b_i are the respective values of A and B in tuple i, \bar{A} and \bar{B} are the respective mean values of A and B, \sigma_A and \sigma_B are the respective standard deviations of A and B (as defined in Section 2.2.2), and \sum (a_i b_i) is the sum of the AB cross-product (i.e., for each tuple, the value for A is multiplied by the value for B in that tuple). Note that -1 <= r_{A,B} <= +1. If r_{A,B} is greater than 0, then A and B are positively correlated, meaning that the values of A increase as the values of B increase. The higher the value, the stronger the correlation (i.e., the more each attribute implies the other). Hence, a higher value may indicate that A (or B) may be removed as a redundancy. If the resulting value is equal to 0, then A and B are independent and there is no correlation between them. If the resulting value is less than 0, then A and B are negatively correlated, where the values of one attribute increase as the values of the other attribute decrease. This means that each attribute discourages the other. Scatter plots can also be used to view correlations between attributes (Section 2.2.3). For example, Figure 2.8's scatter plots respectively show positively correlated data and negatively correlated data, while Figure 2.9 displays uncorrelated data.

Note that correlation does not imply causality. That is, if A and B are correlated, this does not necessarily imply that A causes B or that B causes A. For example, in analyzing a demographic database, we may find that attributes representing the number of hospitals and the number of car thefts in a region are correlated. This does not mean that one causes the other. Both are actually causally linked to a third attribute, namely, population.

Covariance of Numeric Data

In probability theory and statistics, correlation and covariance are two similar measures for assessing how much two attributes change together. Consider two numeric attributes A and B, and a set of n observations {(a_1, b_1), ..., (a_n, b_n)}. The mean values of A and B, respectively, are also known as the expected values on A and B, that is,

E(A) = \bar{A} = \frac{\sum_{i=1}^{n} a_i}{n} \quad \text{and} \quad E(B) = \bar{B} = \frac{\sum_{i=1}^{n} b_i}{n}.

The covariance between A and B is defined as

\mathrm{Cov}(A, B) = E\big((A - \bar{A})(B - \bar{B})\big) = \frac{\sum_{i=1}^{n} (a_i - \bar{A})(b_i - \bar{B})}{n}. \qquad (3.4)

If we compare Eq. (3.3) for r_{A,B} (correlation coefficient) with Eq. (3.4) for covariance, we see that

r_{A,B} = \frac{\mathrm{Cov}(A, B)}{\sigma_A \sigma_B}, \qquad (3.5)

where \sigma_A and \sigma_B are the standard deviations of A and B, respectively. It can also be shown that

\mathrm{Cov}(A, B) = E(A \cdot B) - \bar{A}\bar{B}. \qquad (3.6)

This equation may simplify calculations.

For two attributes A and B that tend to change together, if A is larger than \bar{A} (the expected value of A), then B is likely to be larger than \bar{B} (the expected value of B). Therefore, the covariance between A and B is positive. On the other hand, if one of the attributes tends to be above its expected value when the other attribute is below its expected value, then the covariance of A and B is negative.

If A and B are independent (i.e., they do not have correlation), then E(A \cdot B) = E(A) \cdot E(B). Therefore, the covariance is Cov(A, B) = E(A \cdot B) - \bar{A}\bar{B} = E(A) \cdot E(B) - \bar{A}\bar{B} = 0. However, the converse is not true. Some pairs of random variables (attributes) may have a covariance of 0 but are not independent. Only under some additional assumptions (e.g., the data follow multivariate normal distributions) does a covariance of 0 imply independence.
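The following minimal Python sketch (not from the book) computes the covariance of Eq. (3.4) and the correlation coefficient via Eq. (3.5) for two numeric attributes; the attribute names and values are made up for illustration.

```python
# Minimal sketch (not from the book): covariance (Eq. 3.4) and correlation
# coefficient (Eq. 3.5) for two numeric attributes observed over the same tuples.

import numpy as np

def covariance(a: np.ndarray, b: np.ndarray) -> float:
    """Cov(A, B) = E[(A - mean_A)(B - mean_B)], Eq. (3.4) (divides by n)."""
    return float(np.mean((a - a.mean()) * (b - b.mean())))

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    """r_{A,B} = Cov(A, B) / (sigma_A * sigma_B), Eq. (3.5), population std devs."""
    return covariance(a, b) / (a.std() * b.std())

# Illustrative data for two numeric attributes.
age = np.array([23.0, 31.0, 45.0, 52.0, 60.0])
annual_salary = np.array([30_000.0, 42_000.0, 58_000.0, 61_000.0, 72_000.0])

print("Cov(age, annual_salary) =", covariance(age, annual_salary))
print("r(age, annual_salary)   =", correlation(age, annual_salary))
```

In practice, a correlation coefficient close to +1 or -1 flags one of the two attributes as a candidate redundancy to consider removing during integration.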
Table 3.2 Stock Prices for AllElectronics and HighTech

| Time point | AllElectronics | HighTech |
|------------|----------------|----------|
| t1         | 6              | 20       |
| t2         | 5              | 10       |
| t3         | 4              | 14       |
| t4         | 3              | 5        |
| t5         | 2              | 5        |

Example 3.2 Covariance analysis of numeric attributes. Consider Table 3.2, which presents a simplified example of stock prices observed at five time points for AllElectronics and HighTech, a high-tech company. If the stocks are affected by the same industry trends, will their prices rise or fall together?

E(AllElectronics) = \frac{6 + 5 + 4 + 3 + 2}{5} = \frac{20}{5} = \$4

and

E(HighTech) = \frac{20 + 10 + 14 + 5 + 5}{5} = \frac{54}{5} = \$10.80.

Thus, using Eq. (3.6), we compute

\mathrm{Cov}(AllElectronics, HighTech) = \frac{6 \times 20 + 5 \times 10 + 4 \times 14 + 3 \times 5 + 2 \times 5}{5} - 4 \times 10.80 = 50.2 - 43.2 = 7.

Therefore, given the positive covariance we can say that stock prices for both companies rise together.

Variance is a special case of covariance, where the two attributes are identical (i.e., the covariance of an attribute with itself). Variance was discussed in Chapter 2.

3.3.3 Tuple Duplication

In addition to detecting redundancies between attributes, duplication should also be detected at the tuple level (e.g., where there are two or more identical tuples for a given unique data entry case). The use of denormalized tables (often done to improve performance by avoiding joins) is another source of data redundancy. Inconsistencies often arise between various duplicates, due to inaccurate data entry or updating some but not all data occurrences. For example, if a purchase order database contains attributes for the purchaser's name and address instead of a key to this information in a purchaser database, discrepancies can occur, such as the same purchaser's name appearing with different addresses within the purchase order database.

3.3.4 Data Value Conflict Detection and Resolution

Data integration also involves the detection and resolution of data value conflicts. For example, for the same real-world entity, attribute values from different sources may differ. This may be due to differences in representation, scaling, or encoding. For instance, a weight attribute may be stored in metric units in one system and British imperial units in another. For a hotel chain, the price of rooms in different cities may involve not only different currencies but also different services (e.g., free breakfast) and taxes. When exchanging information between schools, for example, each school may have its own curriculum and grading scheme. One university may adopt a quarter system, offer three courses on database systems, and assign grades from A+ to F, whereas another may adopt a semester system, offer two courses on databases, and assign grades from 1 to 10. It is difficult to work out precise course-to-grade transformation rules between the two universities, making information exchange difficult.

Attributes may also differ on the abstraction level, where an attribute in one system is recorded at, say, a lower abstraction level than the "same" attribute in another. For example, the total sales in one database may refer to one branch of AllElectronics, while an attribute of the same name in another database may refer to the total sales for AllElectronics stores in a given region.
The topic of discrepancy detection is further described in Section 3.2.3 on data cleaning as a process. 3.4 Data Reduction Imagine that you have selected data from the AllElectronics data warehouse for analysis. The data set will likely be huge! Complex data analysis and mining on huge amounts of data can take a long time, making such analysis impractical or infeasible. Data reduction techniques can be applied to obtain a reduced representation of the data set that is much smaller in volume, yet closely maintains the integrity of the original data. That is, mining on the reduced data set should be more efficient yet produce the same (or almost the same) analytical results. In this section, we first present an overview of data reduction strategies, followed by a closer look at individual techniques. 3.4.1 Overview of Data Reduction Strategies Data reduction strategies include dimensionality reduction, numerosity reduction, and data compression. Dimensionality reduction is the process of reducing the number of random variables or attributes under consideration. Dimensionality reduction methods include wavelet HAN 10-ch03-083-124-9780123814791 2011/6/1 3:16 Page 100 #18 100 Chapter 3 Data Preprocessing transforms (Section 3.4.2) and principal components analysis (Section 3.4.3), which transform or project the original data onto a smaller space. Attribute subset selection is a method of dimensionality reduction in which irrelevant, weakly relevant, or redundant attributes or dimensions are detected and removed (Section 3.4.4). Numerosity reduction techniques replace the original data volume by alternative, smaller forms of data representation. These techniques may be parametric or non- parametric. For parametric methods, a model is used to estimate the data, so that typically only the data parameters need to be stored, instead of the actual data. (Out- liers may also be stored.) Regression and log-linear models (Section 3.4.5) are examples. Nonparametric methods for storing reduced representations of the data include his- tograms (Section 3.4.6), clustering (Section 3.4.7), sampling (Section 3.4.8), and data cube aggregation (Section 3.4.9). In data compression, transformations are applied so as to obtain a reduced or “com- pressed” representation of the original data. If the original data can be reconstructed from the compressed data without any information loss, the data reduction is called lossless. If, instead, we can reconstruct only an approximation of the original data, then the data reduction is called lossy. There are several lossless algorithms for string com- pression; however, they typically allow only limited data manipulation. Dimensionality reduction and numerosity reduction techniques can also be considered forms of data compression. There are many other ways of organizing methods of data reduction. The computa- tional time spent on data reduction should not outweigh or “erase” the time saved by mining on a reduced data set size. 3.4.2 Wavelet Transforms The discrete wavelet transform (DWT) is a linear signal processing technique that, when applied to a data vector X, transforms it to a numerically different vector, X0 , of wavelet coefficients. The two vectors are of the same length. When applying this tech- nique to data reduction, we consider each tuple as an n-dimensional data vector, that is, X = (x1 , x2 ,... 
, x_n), depicting n measurements made on the tuple from n database attributes. (In our notation, any variable representing a vector is shown in bold italic font; measurements depicting the vector are shown in italic font.)

"How can this technique be useful for data reduction if the wavelet transformed data are of the same length as the original data?" The usefulness lies in the fact that the wavelet transformed data can be truncated. A compressed approximation of the data can be retained by storing only a small fraction of the strongest of the wavelet coefficients. For example, all wavelet coefficients larger than some user-specified threshold can be retained. All other coefficients are set to 0. The resulting data representation is therefore very sparse, so that operations that can take advantage of data sparsity are computationally very fast if performed in wavelet space. The technique also works to remove noise without smoothing out the main features of the data, making it effective for data cleaning as well. Given a set of coefficients, an approximation of the original data can be constructed by applying the inverse of the DWT used.

The DWT is closely related to the discrete Fourier transform (DFT), a signal processing technique involving sines and cosines. In general, however, the DWT achieves better lossy compression. That is, if the same number of coefficients is retained for a DWT and a DFT of a given data vector, the DWT version will provide a more accurate approximation of the original data. Hence, for an equivalent approximation, the DWT requires less space than the DFT. Unlike the DFT, wavelets are quite localized in space, contributing to the conservation of local detail.

There is only one DFT, yet there are several families of DWTs. Figure 3.4 shows some wavelet families. Popular wavelet transforms include the Haar-2, Daubechies-4, and Daubechies-6. The general procedure for applying a discrete wavelet transform uses a hierarchical pyramid algorithm that halves the data at each iteration, resulting in fast computational speed. The method is as follows:

1. The length, L, of the input data vector must be an integer power of 2. This condition can be met by padding the data vector with zeros as necessary (L >= n).

2. Each transform involves applying two functions. The first applies some data smoothing, such as a sum or weighted average. The second performs a weighted difference, which acts to bring out the detailed features of the data.

3. The two functions are applied to pairs of data points in X, that is, to all pairs of measurements (x_{2i}, x_{2i+1}). This results in two data sets of length L/2. In general, these represent a smoothed or low-frequency version of the input data and the high-frequency content of it, respectively.

4. The two functions are recursively applied to the data sets obtained in the previous loop, until the resulting data sets obtained are of length 2.

5. Selected values from the data sets obtained in the previous iterations are designated the wavelet coefficients of the transformed data.

[Figure 3.4 Examples of wavelet families: (a) Haar-2 and (b) Daubechies-4. The number next to a wavelet name is the number of vanishing moments of the wavelet. This is a set of mathematical relationships that the coefficients must satisfy and is related to the number of coefficients.]
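As an illustration of the pyramid algorithm above, here is a minimal Python sketch (not from the book) that uses the Haar wavelet's averaging and differencing as the smoothing and difference functions; the truncation step shows how keeping only the strongest coefficients yields a sparse, reduced representation.

```python
# Minimal sketch (not from the book): the hierarchical pyramid algorithm of the
# DWT, using Haar averaging/differencing as the smoothing and difference functions.

import numpy as np

def haar_dwt(x):
    """Return Haar wavelet coefficients of x, padding with zeros to a power of 2."""
    x = np.asarray(x, dtype=float)
    L = 1
    while L < len(x):
        L *= 2
    data = np.zeros(L)
    data[:len(x)] = x                    # step 1: pad so the length is a power of 2

    coeffs = []
    while len(data) > 1:
        smooth = (data[0::2] + data[1::2]) / np.sqrt(2)   # low-frequency (averages)
        detail = (data[0::2] - data[1::2]) / np.sqrt(2)   # high-frequency (differences)
        coeffs.append(detail)            # keep the detail coefficients at this level
        data = smooth                    # recurse on the smoothed, half-length data
    coeffs.append(data)                  # final overall smooth value
    return np.concatenate(coeffs[::-1])

def truncate(coeffs, keep):
    """Data reduction: keep the `keep` largest-magnitude coefficients, zero the rest."""
    out = np.zeros_like(coeffs)
    idx = np.argsort(np.abs(coeffs))[-keep:]
    out[idx] = coeffs[idx]
    return out

x = [2.0, 2.0, 0.0, 2.0, 3.0, 5.0, 4.0, 4.0]
c = haar_dwt(x)
print("wavelet coefficients:", np.round(c, 3))
print("sparse approximation:", np.round(truncate(c, 3), 3))
```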
HAN 10-ch03-083-124-9780123814791 2011/6/1 3:16 Page 102 #20 102 Chapter 3 Data Preprocessing Equivalently, a matrix multiplication can be applied to the input data in order to obtain the wavelet coefficients, where the matrix used depends on the given DWT. The matrix must be orthonormal, meaning that the columns are unit vectors and are mutu- ally orthogonal, so that the matrix inverse is just its transpose. Although we do not have room to discuss it here, this property allows the reconstruction of the data from the smooth and smooth-difference data sets. By factoring the matrix used into a product of a few sparse matrices, the resulting “fast DWT” algorithm has a complexity of O(n) for an input vector of length n. Wavelet transforms can be applied to multidimensional data such as a data cube. This is done by first applying the transform to the first dimension, then to the second, and so on. The computational complexity involved is linear with respect to the number of cells in the cube. Wavelet transforms give good results on sparse or skewed data and on data with ordered attributes. Lossy compression by wavelets is reportedly better than JPEG compression, the current commercial standard. Wavelet transforms have many real- world applications, including the compression of fingerprint images, computer vision, analysis of time-series data, and data cleaning. 3.4.3 Principal Components Analysis In this subsection we provide an intuitive introduction to principal components analy- sis as a method of dimesionality reduction. A detailed theoretical explanation is beyond the scope of this book. For additional references, please see the bibliographic notes (Section 3.8) at the end of this chapter. Suppose that the data to be reduced consist of tuples or data vectors described by n attributes or dimensions. Principal components analysis (PCA; also called the Karhunen-Loeve, or K-L, method) searches for k n-dimensional orthogonal vectors that can best be used to represent the data, where k n. The original data are thus projected onto a much smaller space, resulting in dimensionality reduction. Unlike attribute sub- set selection (Section 3.4.4), which reduces the attribute set size by retaining a subset of the initial set of attributes, PCA “combines” the essence of attributes by creating an alter- native, smaller set of variables. The initial data can then be projected onto this smaller set. PCA often reveals relationships that were not previously suspected and thereby allows interpretations that would not ordinarily result. The basic procedure is as follows: 1. The input data are normalized, so that each attribute falls within the same range. This step helps ensure that attributes with large domains will not dominate attributes with smaller domains. 2. PCA computes k orthonormal vectors that provide a basis for the normalized input data. These are unit vectors that each point in a direction perpendicular to the others. These vectors are referred to as the principal components. The input data are a linear combination of the principal components. 3. The principal components are sorted in order of decreasing “significance” or strength. The principal components essentially serve as a new set of axes for the data, HAN 10-ch03-083-124-9780123814791 2011/6/1 3:16 Page 103 #21 3.4 Data Reduction 103 X2 Y2 Y1 X1 Figure 3.5 Principal components analysis. Y1 and Y2 are the first two principal components for the given data. providing important information about variance. 
That is, the sorted axes are such that the first axis shows the most variance among the data, the second axis shows the next highest variance, and so on. For example, Figure 3.5 shows the first two principal components, Y1 and Y2, for the given set of data originally mapped to the axes X1 and X2. This information helps identify groups or patterns within the data.

4. Because the components are sorted in decreasing order of "significance," the data size can be reduced by eliminating the weaker components, that is, those with low variance. Using the strongest principal components, it should be possible to reconstruct a good approximation of the original data.

PCA can be applied to ordered and unordered attributes, and can handle sparse data and skewed data. Multidimensional data of more than two dimensions can be handled by reducing the problem to two dimensions. Principal components may be used as inputs to multiple regression and cluster analysis. In comparison with wavelet transforms, PCA tends to be better at handling sparse data, whereas wavelet transforms are more suitable for data of high dimensionality.

3.4.4 Attribute Subset Selection

Data sets for analysis may contain hundreds of attributes, many of which may be irrelevant to the mining task or redundant. For example, if the task is to classify customers based on whether or not they are likely to purchase a popular new CD at AllElectronics when notified of a sale, attributes such as the customer's telephone number are likely to be irrelevant, unlike attributes such as age or music taste. Although it may be possible for a domain expert to pick out some of the useful attributes, this can be a difficult and time-consuming task, especially when the data's behavior is not well known. (Hence, a reason behind its analysis!) Leaving out relevant attributes or keeping irrelevant attributes may be detrimental, causing confusion for the mining algorithm employed. This can result in discovered patterns of poor quality. In addition, the added volume of irrelevant or redundant attributes can slow down the mining process.

Attribute subset selection (known in machine learning as feature subset selection) reduces the data set size by removing irrelevant or redundant attributes (or dimensions). The goal of attribute subset selection is to find a minimum set of attributes such that the resulting probability distribution of the data classes is as close as possible to the original distribution obtained using all attributes. Mining on a reduced set of attributes has an additional benefit: It reduces the number of attributes appearing in the discovered patterns, helping to make the patterns easier to understand.

"How can we find a 'good' subset of the original attributes?" For n attributes, there are 2^n possible subsets. An exhaustive search for the optimal subset of attributes can be prohibitively expensive, especially as n and the number of data classes increase. Therefore, heuristic methods that explore a reduced search space are commonly used for attribute subset selection. These methods are typically greedy in that, while searching through attribute space, they always make what looks to be the best choice at the time. Their strategy is to make a locally optimal choice in the hope that this will lead to a globally optimal solution. Such greedy methods are effective in practice and may come close to estimating an optimal solution.
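As a concrete illustration of this greedy strategy, the sketch below (not from the book) implements stepwise forward selection, the first of the heuristics listed next; evaluate_subset is a hypothetical scoring function supplied by the caller, for example the cross-validated accuracy of a classifier restricted to the candidate attributes.

```python
# Minimal sketch (not from the book): greedy stepwise forward selection.
# `evaluate_subset` is a hypothetical scoring function supplied by the caller.

from typing import Callable, List, Set

def forward_selection(attributes: List[str],
                      evaluate_subset: Callable[[Set[str]], float],
                      max_attrs: int) -> Set[str]:
    selected: Set[str] = set()              # start from an empty reduced set
    best_score = evaluate_subset(selected)
    while len(selected) < max_attrs:
        # Greedy step: try adding each remaining attribute and keep the best one.
        candidates = [(evaluate_subset(selected | {a}), a)
                      for a in attributes if a not in selected]
        score, attr = max(candidates)
        if score <= best_score:             # no attribute improves the score: stop
            break
        selected.add(attr)
        best_score = score
    return selected
```

Stepwise backward elimination is symmetric: start from the full attribute set and repeatedly remove the attribute whose removal degrades the score the least.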
In some cases, we may want to create new attributes based on others. Such attribute construction (known in the machine learning literature as feature construction) can help improve accuracy and the understanding of structure in high-dimensional data. For example, we may wish to add the attribute area based on the attributes height and width. By combining attributes, attribute construction can discover missing information about the relationships between data attributes that can be useful for knowledge discovery.
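As a small illustration of attribute construction using the height and width example above (the tuple values below are made up for the sketch):

```python
# Attribute construction: derive a new attribute from existing ones.
tuples = [{"height": 2.0, "width": 3.5},
          {"height": 1.2, "width": 4.0}]
for t in tuples:
    t["area"] = t["height"] * t["width"]   # new attribute: area = height x width
```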
3.4.5 Regression and Log-Linear Models: Parametric Data Reduction

Regression and log-linear models can be used to approximate the given data. In (simple) linear regression, the data are modeled to fit a straight line. For example, a random variable, y (called a response variable), can be modeled as a linear function of another random variable, x (called a predictor variable), with the equation

y = wx + b,    (3.7)

where the variance of y is assumed to be constant. In the context of data mining, x and y are numeric database attributes. The coefficients, w and b (called regression coefficients), specify the slope of the line and the y-intercept, respectively. These coefficients can be solved for by the method of least squares, which minimizes the error between the observed data and the estimated line. Multiple linear regression is an extension of (simple) linear regression, which allows a response variable, y, to be modeled as a linear function of two or more predictor variables.

Log-linear models approximate discrete multidimensional probability distributions. Given a set of tuples in n dimensions (e.g., described by n attributes), we can consider each tuple as a point in an n-dimensional space. Log-linear models can be used to estimate the probability of each point in a multidimensional space for a set of discretized attributes, based on a smaller subset of dimensional combinations. This allows a higher-dimensional data space to be constructed from lower-dimensional spaces. Log-linear models are therefore also useful for dimensionality reduction (since the lower-dimensional points together typically occupy less space than the original data points) and data smoothing (since aggregate estimates in the lower-dimensional space are less subject to sampling variations than the estimates in the higher-dimensional space).

Regression and log-linear models can both be used on sparse data, although their application may be limited. While both methods can handle skewed data, regression does exceptionally well. Regression can be computationally intensive when applied to high-dimensional data, whereas log-linear models show good scalability for up to 10 or so dimensions. Several software packages exist to solve regression problems. Examples include SAS (www.sas.com), SPSS (www.spss.com), and S-Plus (www.insightful.com). Another useful resource is the book Numerical Recipes in C, by Press, Teukolsky, Vetterling, and Flannery [PTVF07], and its associated source code.
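For Equation (3.7), the least-squares coefficients have a simple closed form. The sketch below computes them with NumPy; the function name and the sample values in the call are illustrative only.

```python
import numpy as np

def least_squares_line(x, y):
    """Fit y = w*x + b (Equation 3.7) by the method of least squares."""
    x_mean, y_mean = x.mean(), y.mean()
    w = ((x - x_mean) * (y - y_mean)).sum() / ((x - x_mean) ** 2).sum()
    b = y_mean - w * x_mean
    return w, b

# The two coefficients (w, b) replace the full set of (x, y) value pairs:
# a parametric reduction of the data.
w, b = least_squares_line(np.array([1.0, 2.0, 3.0, 4.0]),
                          np.array([2.1, 3.9, 6.2, 8.1]))
```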
3.4.6 Histograms

Histograms use binning to approximate data distributions and are a popular form of data reduction. Histograms were introduced in Section 2.2.3. A histogram for an attribute, A, partitions the data distribution of A into disjoint subsets, referred to as buckets or bins. If each bucket represents only a single attribute–value/frequency pair, the buckets are called singleton buckets. Often, buckets instead represent continuous ranges for the given attribute.

Example 3.3 Histograms. The following data are a list of AllElectronics prices for commonly sold items (rounded to the nearest dollar). The numbers have been sorted: 1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15, 15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18, 20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25, 28, 28, 30, 30, 30. Figure 3.7 shows a histogram for the data using singleton buckets. To further reduce the data, it is common to have each bucket denote a continuous value range for the given attribute. In Figure 3.8, each bucket represents a different $10 range for price.

[Figure 3.7 A histogram for price using singleton buckets: each bucket represents one price–value/frequency pair.]

[Figure 3.8 An equal-width histogram for price, where values are aggregated so that each bucket has a uniform width of $10.]

“How are the buckets determined and the attribute values partitioned?” There are several partitioning rules, including the following (a code sketch for the first rule appears at the end of this subsection):

Equal-width: In an equal-width histogram, the width of each bucket range is uniform (e.g., the width of $10 for the buckets in Figure 3.8).

Equal-frequency (or equal-depth): In an equal-frequency histogram, the buckets are created so that, roughly, the frequency of each bucket is constant (i.e., each bucket contains roughly the same number of contiguous data samples).

Histograms are highly effective at approximating both sparse and dense data, as well as highly skewed and uniform data. The histograms described before for single attributes can be extended for multiple attributes. Multidimensional histograms can capture dependencies between attributes. These histograms have been found effective in approximating data with up to five attributes. More studies are needed regarding the effectiveness of multidimensional histograms for high dimensionalities. Singleton buckets are useful for storing high-frequency outliers.
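A minimal sketch of equal-width bucketing applied to the Example 3.3 prices. The function name and the choice of $10-wide buckets starting at $1 (as in Figure 3.8) are the only assumptions.

```python
prices = [1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14,
          15, 15, 15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18,
          20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21,
          25, 25, 25, 25, 25, 28, 28, 30, 30, 30]

def equal_width_buckets(values, width):
    """Equal-width histogram: each bucket covers `width` consecutive values."""
    counts = {}
    for v in values:
        low = ((v - 1) // width) * width + 1        # buckets 1-10, 11-20, 21-30, ...
        bucket = (low, low + width - 1)
        counts[bucket] = counts.get(bucket, 0) + 1
    return counts

print(equal_width_buckets(prices, 10))
# {(1, 10): 13, (11, 20): 25, (21, 30): 14} -- the three bars of Figure 3.8
```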
3.4.7 Clustering

Clustering techniques consider data tuples as objects. They partition the objects into groups, or clusters, so that objects within a cluster are “similar” to one another and “dissimilar” to objects in other clusters. Similarity is commonly defined in terms of how “close” the objects are in space, based on a distance function. The “quality” of a cluster may be represented by its diameter, the maximum distance between any two objects in the cluster. Centroid distance is an alternative measure of cluster quality and is defined as the average distance of each cluster object from the cluster centroid (denoting the “average object,” or average point in space for the cluster). Figure 3.3 showed a 2-D plot of customer data with respect to customer locations in a city, in which three data clusters are visible.

In data reduction, the cluster representations of the data are used to replace the actual data. The effectiveness of this technique depends on the data’s nature. It is much more effective for data that can be organized into distinct clusters than for smeared data. There are many measures for defining clusters and cluster quality. Clustering methods are further described in Chapters 10 and 11.

3.4.8 Sampling

Sampling can be used as a data reduction technique because it allows a large data set to be represented by a much smaller random data sample (or subset). Suppose that a large data set, D, contains N tuples. Let’s look at the most common ways that we could sample D for data reduction, as illustrated in Figure 3.9 (a code sketch of several of these methods appears at the end of this subsection).

Simple random sample without replacement (SRSWOR) of size s: This is created by drawing s of the N tuples from D (s < N), where the probability of drawing any tuple in D is 1/N, that is, all tuples are equally likely to be sampled.

Simple random sample with replacement (SRSWR) of size s: This is similar to SRSWOR, except that each time a tuple is drawn from D, it is recorded and then replaced. That is, after a tuple is drawn, it is placed back in D so that it may be drawn again.

Cluster sample: If the tuples in D are grouped into M mutually disjoint “clusters,” then an SRS of s clusters can be obtained, where s < M. For example, tuples in a database are usually retrieved a page at a time, so that each page can be considered a cluster. A reduced data representation can be obtained by applying, say, SRSWOR to the pages, resulting in a cluster sample of the tuples. Other clustering criteria conveying rich semantics can also be explored. For example, in a spatial database, we may choose to define clusters geographically based on how closely different areas are located.

Stratified sample: If D is divided into mutually disjoint parts called strata, a stratified sample of D is generated by obtaining an SRS at each stratum. This helps ensure a representative sample, especially when the data are skewed. For example, a stratified sample may be obtained from customer data, where a stratum is created for each customer age group. In this way, the age group having the smallest number of customers will be sure to be represented.

[Figure 3.9 Sampling can be used for data reduction. The figure illustrates the sampling methods described here, including cluster and stratified samples.]

An advantage of sampling for data reduction is that the cost of obtaining a sample is proportional to the size of the sample, s, as opposed to N, the data set size. Hence, sampling complexity is potentially sublinear to the size of the data. Other data reduction techniques can require at least one complete pass through D. For a fixed sample size, sampling complexity increases only linearly as the number of data dimensions, n, increases, whereas techniques using histograms, for example, increase exponentially in n.

When applied to data reduction, sampling is most commonly used to estimate the answer to an aggregate query. It is possible (using the central limit theorem) to determine a sufficient sample size for estimating a given function within a specified degree of error. This sample size, s, may be extremely small in comparison to N. Sampling is a natural choice for the progressive refinement of a reduced data set. Such a set can be further refined by simply increasing the sample size.
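The sketch below illustrates SRSWOR, SRSWR, and stratified sampling with Python’s standard random module; the function names, the per-stratum sample size, and the key function are assumptions made for the example.

```python
import random

def srswor(D, s):
    """Simple random sample without replacement: s distinct tuples from D."""
    return random.sample(D, s)

def srswr(D, s):
    """Simple random sample with replacement: a tuple may be drawn again."""
    return [random.choice(D) for _ in range(s)]

def stratified_sample(D, key, s_per_stratum):
    """Draw an SRS of up to s_per_stratum tuples within each stratum,
    where key(tuple) identifies the stratum (e.g., a customer age group)."""
    strata = {}
    for t in D:
        strata.setdefault(key(t), []).append(t)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, min(s_per_stratum, len(members))))
    return sample
```

A cluster sample can be obtained in the same spirit by first grouping tuples into pages (or other clusters) and applying srswor to the list of clusters rather than to individual tuples.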
3.4.9 Data Cube Aggregation

Imagine that you have collected the data for your analysis. These data consist of the AllElectronics sales per quarter, for the years 2008 to 2010. You are, however, interested in the annual sales (total per year), rather than the total per quarter. Thus, the data can be aggregated so that the resulting data summarize the total sales per year instead of per quarter. This aggregation is illustrated in Figure 3.10. The resulting data set is smaller in volume, without loss of information necessary for the analysis task.

[Figure 3.10 Sales data for a given branch of AllElectronics for the years 2008 through 2010. On the left, the sales are shown per quarter (the front table lists Q1 $224,000, Q2 $408,000, Q3 $350,000, and Q4 $586,000, which sum to the 2008 total). On the right, the data are aggregated to provide the annual sales: 2008 $1,568,000; 2009 $2,356,000; 2010 $3,594,000.]

Data cubes are discussed in detail in Chapter 4 on data warehousing and Chapter 5 on data cube technology. We briefly introduce some concepts here. Data cubes store multidimensional aggregated information. For example, Figure 3.11 shows a data cube for multidimensional analysis of sales data with respect to annual sales per item type for each AllElectronics branch. Each cell holds an aggregate data value, corresponding to the data point in multidimensional space. (For readability, only some cell values are shown.)

[Figure 3.11 A data cube for sales at AllElectronics. The dimensions are item_type (home entertainment, computer, phone, security), year (2008–2010), and branch (A through D); the visible cell values include 568 for home entertainment, 750 for computer, 150 for phone, and 50 for security.]

Concept hierarchies may exist for each attribute, allowing the analysis of data at multiple abstraction levels. For example, a hierarchy for branch could allow branches to be grouped into regions, based on their address. Data cubes provide fast access to precomputed, summarized data, thereby benefiting online analytical processing as well as data mining.

The cube created at the lowest abstraction level is referred to as the base cuboid. The base cuboid should correspond to an individual entity of interest such as sales or customer. In other words, the lowest level should be usable, or useful for the analysis. A cube at the highest level of abstraction is the apex cuboid. For the sales data in Figure 3.11, the apex cuboid would give one total: the total sales for all three years, for all item types, and for all branches. Data cubes created for varying levels of abstraction are often referred to as cuboids, so that a data cube may instead refer to a lattice of cuboids. Each higher abstraction level further reduces the resulting data size. When replying to data mining requests, the smallest available cuboid relevant to the given task should be used. This issue is also addressed in Chapter 4.

3.5 Data Transformation and Data Discretization

This section presents methods of data transformation. In this preprocessing step, the data are transformed or consolidated so that the resulting mining process may be more efficient, and the patterns found may be easier to understand. Data discretization, a form of data transformation, is also discussed.

3.5.1 Data Transformation Strategies Overview

In data transformat