Fundamentals of Database Systems PDF
Summary
This chapter of the textbook covers fundamental database concepts: database architectures (including the client/server model), data models, schemas and instances, database state, and the different categories of data models, along with the levels of data abstraction that data models provide.
Full Transcript
Chapter 2 Database System Concepts and Architecture

The architecture of DBMS packages has evolved from the early monolithic systems, where the whole DBMS software package was one tightly integrated system, to the modern DBMS packages that are modular in design, with a client/server system architecture. The recent growth in the amount of data requiring storage has led to database systems with distributed architectures comprised of thousands of computers that manage the data stores. This evolution mirrors the trends in computing, where large centralized mainframe computers are replaced by hundreds of distributed workstations and personal computers connected via communications networks to various types of server machines—Web servers, database servers, file servers, application servers, and so on. The current cloud computing environments consist of thousands of large servers managing so-called big data for users on the Web.

In a basic client/server DBMS architecture, the system functionality is distributed between two types of modules.[1] A client module is typically designed so that it will run on a mobile device, user workstation, or personal computer (PC). Typically, application programs and user interfaces that access the database run in the client module. Hence, the client module handles user interaction and provides the user-friendly interfaces such as apps for mobile devices, or forms- or menu-based GUIs (graphical user interfaces) for PCs. The other kind of module, called a server module, typically handles data storage, access, search, and other functions. We discuss client/server architectures in more detail in Section 2.5. First, we must study more basic concepts that will give us a better understanding of modern database architectures.

[1] As we shall see in Section 2.5, there are variations on this simple two-tier client/server architecture.

In this chapter we present the terminology and basic concepts that will be used throughout the text. Section 2.1 discusses data models and defines the concepts of schemas and instances, which are fundamental to the study of database systems. We discuss the three-schema DBMS architecture and data independence in Section 2.2; this provides a user's perspective on what a DBMS is supposed to do. In Section 2.3 we describe the types of interfaces and languages that are typically provided by a DBMS. Section 2.4 discusses the database system software environment. Section 2.5 gives an overview of various types of client/server architectures. Finally, Section 2.6 presents a classification of the types of DBMS packages. Section 2.7 summarizes the chapter. The material in Sections 2.4 through 2.6 provides detailed concepts that may be considered as supplementary to the basic introductory material.

2.1 Data Models, Schemas, and Instances

One fundamental characteristic of the database approach is that it provides some level of data abstraction. Data abstraction generally refers to the suppression of details of data organization and storage, and the highlighting of the essential features for an improved understanding of data. One of the main characteristics of the database approach is to support data abstraction so that different users can perceive data at their preferred level of detail.
A data model—a collection of concepts that can be used to describe the structure of a database—provides the necessary means to achieve this abstraction.[2] By structure of a database we mean the data types, relationships, and constraints that apply to the data. Most data models also include a set of basic operations for specifying retrievals and updates on the database.

In addition to the basic operations provided by the data model, it is becoming more common to include concepts in the data model to specify the dynamic aspect or behavior of a database application. This allows the database designer to specify a set of valid user-defined operations that are allowed on the database objects.[3] An example of a user-defined operation could be COMPUTE_GPA, which can be applied to a STUDENT object. On the other hand, generic operations to insert, delete, modify, or retrieve any kind of object are often included in the basic data model operations. Concepts to specify behavior are fundamental to object-oriented data models (see Chapter 12) but are also being incorporated in more traditional data models. For example, object-relational models (see Chapter 12) extend the basic relational model to include such concepts, among others. In the basic relational data model, there is a provision to attach behavior to the relations in the form of persistent stored modules, popularly known as stored procedures (see Chapter 10).

[2] Sometimes the word model is used to denote a specific database description, or schema—for example, the marketing data model. We will not use this interpretation.

[3] The inclusion of concepts to describe behavior reflects a trend whereby database design and software design activities are increasingly being combined into a single activity. Traditionally, specifying behavior is associated with software design.

2.1.1 Categories of Data Models

Many data models have been proposed, which we can categorize according to the types of concepts they use to describe the database structure. High-level or conceptual data models provide concepts that are close to the way many users perceive data, whereas low-level or physical data models provide concepts that describe the details of how data is stored on the computer storage media, typically magnetic disks. Concepts provided by physical data models are generally meant for computer specialists, not for end users. Between these two extremes is a class of representational (or implementation) data models,[4] which provide concepts that may be easily understood by end users but that are not too far removed from the way data is organized in computer storage. Representational data models hide many details of data storage on disk but can be implemented on a computer system directly.

Conceptual data models use concepts such as entities, attributes, and relationships. An entity represents a real-world object or concept, such as an employee or a project from the miniworld that is described in the database. An attribute represents some property of interest that further describes an entity, such as the employee's name or salary. A relationship among two or more entities represents an association among the entities, for example, a works-on relationship between an employee and a project. Chapter 3 presents the entity–relationship model—a popular high-level conceptual data model. Chapter 4 describes additional abstractions used for advanced modeling, such as generalization, specialization, and categories (union types).
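As a rough illustration of these ideas (not an example from the book), the sketch below models a STUDENT entity with a few attributes and a user-defined COMPUTE_GPA operation; the attribute names and the grade-point scale are assumptions.

```python
# Minimal sketch of a STUDENT "object" with attributes and a user-defined
# COMPUTE_GPA operation, in the spirit of the text's example. The attribute
# names and the 4.0 grade-point scale are illustrative assumptions.
from dataclasses import dataclass, field

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}  # assumed scale

@dataclass
class Student:
    name: str                 # attribute
    student_number: int       # attribute
    class_year: int           # attribute
    major: str                # attribute
    grades: list = field(default_factory=list)  # (course, grade, credit_hours)

    def compute_gpa(self) -> float:
        """User-defined operation analogous to COMPUTE_GPA in the text."""
        total_points = sum(GRADE_POINTS[g] * hours for _, g, hours in self.grades)
        total_hours = sum(hours for _, _, hours in self.grades)
        return total_points / total_hours if total_hours else 0.0

s = Student("Smith", 17, 1, "CS", [("CS1310", "A", 4), ("MATH2410", "B", 3)])
print(round(s.compute_gpa(), 2))  # 3.57
```

Generic insert, delete, modify, and retrieve operations, by contrast, would be supplied by the data model itself rather than defined by the designer.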
Representational or implementation data models are the models used most frequently in traditional commercial DBMSs. These include the widely used relational data model, as well as the so-called legacy data models—the network and hierarchical models—that have been widely used in the past. Part 3 of the text is devoted to the relational data model, and its constraints, operations, and languages.[5] The SQL standard for relational databases is described in Chapters 6 and 7. Representational data models represent data by using record structures and hence are sometimes called record-based data models.

We can regard the object data model as an example of a new family of higher-level implementation data models that are closer to conceptual data models. A standard for object databases called the ODMG object model has been proposed by the Object Data Management Group (ODMG). We describe the general characteristics of object databases and the object model proposed standard in Chapter 12. Object data models are also frequently utilized as high-level conceptual models, particularly in the software engineering domain.

Physical data models describe how data is stored as files in the computer by representing information such as record formats, record orderings, and access paths. An access path is a search structure that makes the search for particular database records efficient, such as indexing or hashing. We discuss physical storage techniques and access structures in Chapters 16 and 17. An index is an example of an access path that allows direct access to data using an index term or a keyword. It is similar to the index at the end of this text, except that it may be organized in a linear, hierarchical (tree-structured), or some other fashion.

[4] The term implementation data model is not a standard term; we have introduced it to refer to the available data models in commercial database systems.

[5] A summary of the hierarchical and network data models is included in Appendices D and E. They are accessible from the book's Web site.

Another class of data models is known as self-describing data models. The data storage in systems based on these models combines the description of the data with the data values themselves. In traditional DBMSs, the description (schema) is separated from the data. These models include XML (see Chapter 12) as well as many of the key-value stores and NOSQL systems (see Chapter 24) that were recently created for managing big data.

2.1.2 Schemas, Instances, and Database State

In a data model, it is important to distinguish between the description of the database and the database itself. The description of a database is called the database schema, which is specified during database design and is not expected to change frequently.[6] Most data models have certain conventions for displaying schemas as diagrams.[7] A displayed schema is called a schema diagram. Figure 2.1 shows a schema diagram for the database shown in Figure 1.2; the diagram displays the structure of each record type but not the actual instances of records.

[Figure 2.1: Schema diagram for the database in Figure 1.2, showing the record types STUDENT (Name, Student_number, Class, Major), COURSE (Course_name, Course_number, Credit_hours, Department), PREREQUISITE (Course_number, Prerequisite_number), SECTION (Section_identifier, Course_number, Semester, Year, Instructor), and GRADE_REPORT (Student_number, Section_identifier, Grade).]

[6] Schema changes are usually needed as the requirements of the database applications change. Most database systems include operations for allowing schema changes.

[7] It is customary in database parlance to use schemas as the plural for schema, even though schemata is the proper plural form. The word scheme is also sometimes used to refer to a schema.
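To make the notion of a schema concrete, the following sketch writes the record types shown in Figure 2.1 as SQL table definitions and executes them with Python's built-in sqlite3 module; since the schema diagram does not show data types, the types used here are illustrative assumptions.

```python
# A sketch of the Figure 2.1 schema expressed as SQL DDL and executed with
# Python's built-in sqlite3 module. The diagram does not specify data types,
# so the types below (TEXT, INTEGER, ...) are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE STUDENT (
    Name            TEXT,
    Student_number  INTEGER PRIMARY KEY,
    Class           INTEGER,
    Major           TEXT
);
CREATE TABLE COURSE (
    Course_name     TEXT,
    Course_number   TEXT PRIMARY KEY,
    Credit_hours    INTEGER,
    Department      TEXT
);
CREATE TABLE PREREQUISITE (
    Course_number        TEXT,
    Prerequisite_number  TEXT
);
CREATE TABLE SECTION (
    Section_identifier  INTEGER PRIMARY KEY,
    Course_number       TEXT,
    Semester            TEXT,
    Year                INTEGER,
    Instructor          TEXT
);
CREATE TABLE GRADE_REPORT (
    Student_number      INTEGER,
    Section_identifier  INTEGER,
    Grade               TEXT
);
""")
# The schema (description) now exists, but it holds no records yet.
print(conn.execute("SELECT COUNT(*) FROM STUDENT").fetchone()[0])  # 0
```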
We call each object in the schema—such as STUDENT or COURSE—a schema construct. A schema diagram displays only some aspects of a schema, such as the names of record types and data items, and some types of constraints. Other aspects are not specified in the schema diagram; for example, Figure 2.1 shows neither the data type of each data item nor the relationships among the various files. Many types of constraints are not represented in schema diagrams. A constraint such as students majoring in computer science must take CS1310 before the end of their sophomore year is quite difficult to represent diagrammatically.

The actual data in a database may change quite frequently. For example, the database shown in Figure 1.2 changes every time we add a new student or enter a new grade. The data in the database at a particular moment in time is called a database state or snapshot. It is also called the current set of occurrences or instances in the database. In a given database state, each schema construct has its own current set of instances; for example, the STUDENT construct will contain the set of individual student entities (records) as its instances. Many database states can be constructed to correspond to a particular database schema. Every time we insert or delete a record or change the value of a data item in a record, we change one state of the database into another state.

The distinction between database schema and database state is very important. When we define a new database, we specify its database schema only to the DBMS. At this point, the corresponding database state is the empty state with no data. We get the initial state of the database when the database is first populated or loaded with the initial data. From then on, every time an update operation is applied to the database, we get another database state. At any point in time, the database has a current state.[8] The DBMS is partly responsible for ensuring that every state of the database is a valid state—that is, a state that satisfies the structure and constraints specified in the schema. Hence, specifying a correct schema to the DBMS is extremely important and the schema must be designed with utmost care. The DBMS stores the descriptions of the schema constructs and constraints—also called the meta-data—in the DBMS catalog so that DBMS software can refer to the schema whenever it needs to. The schema is sometimes called the intension, and a database state is called an extension of the schema.

Although, as mentioned earlier, the schema is not supposed to change frequently, it is not uncommon that changes occasionally need to be applied to the schema as the application requirements change. For example, we may decide that another data item needs to be stored for each record in a file, such as adding the Date_of_birth to the STUDENT schema in Figure 2.1. This is known as schema evolution.
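Continuing in the same spirit (a standalone sketch with assumed values), the following shows the empty state after the schema is defined, new states as records are inserted and updated, and a simple schema evolution step that adds Date_of_birth.

```python
# Standalone sketch: the schema (intension) stays fixed while each update
# produces a new database state (extension). Table layout follows Figure 2.1;
# the values and data types are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE STUDENT (Name TEXT, Student_number INTEGER, "
             "Class INTEGER, Major TEXT)")

# Empty state: the schema is defined but no data has been loaded.
print(conn.execute("SELECT COUNT(*) FROM STUDENT").fetchone()[0])   # 0

# Initial state after loading data; every further update yields a new state.
conn.execute("INSERT INTO STUDENT VALUES ('Smith', 17, 1, 'CS')")
conn.execute("INSERT INTO STUDENT VALUES ('Brown', 8, 2, 'CS')")
conn.execute("UPDATE STUDENT SET Class = 2 WHERE Student_number = 17")
print(conn.execute("SELECT COUNT(*) FROM STUDENT").fetchone()[0])   # 2 instances

# Schema evolution: adding the Date_of_birth data item to STUDENT, as in the
# text's example. Most modern DBMSs can apply this while the database is live.
conn.execute("ALTER TABLE STUDENT ADD COLUMN Date_of_birth TEXT")
```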
Most modern DBMSs include some operations for schema evolution that can be applied while the database is operational.

[8] The current state is also called the current snapshot of the database. It has also been called a database instance, but we prefer to use the term instance to refer to individual records.

2.2 Three-Schema Architecture and Data Independence

Three of the four important characteristics of the database approach, listed in Section 1.3, are (1) use of a catalog to store the database description (schema) so as to make it self-describing, (2) insulation of programs and data (program-data and program-operation independence), and (3) support of multiple user views. In this section we specify an architecture for database systems, called the three-schema architecture,[9] that was proposed to help achieve and visualize these characteristics. Then we discuss further the concept of data independence.

2.2.1 The Three-Schema Architecture

The goal of the three-schema architecture, illustrated in Figure 2.2, is to separate the user applications from the physical database. In this architecture, schemas can be defined at the following three levels:

1. The internal level has an internal schema, which describes the physical storage structure of the database. The internal schema uses a physical data model and describes the complete details of data storage and access paths for the database.

2. The conceptual level has a conceptual schema, which describes the structure of the whole database for a community of users. The conceptual schema hides the details of physical storage structures and concentrates on describing entities, data types, relationships, user operations, and constraints. Usually, a representational data model is used to describe the conceptual schema when a database system is implemented. This implementation conceptual schema is often based on a conceptual schema design in a high-level data model.

3. The external or view level includes a number of external schemas or user views. Each external schema describes the part of the database that a particular user group is interested in and hides the rest of the database from that user group. As in the previous level, each external schema is typically implemented using a representational data model, possibly based on an external schema design in a high-level conceptual data model.

[Figure 2.2: The three-schema architecture. End users work with external views at the external level; an external/conceptual mapping relates each external view to the conceptual schema at the conceptual level, and a conceptual/internal mapping relates the conceptual schema to the internal schema at the internal level, which describes the stored database.]

[9] This is also known as the ANSI/SPARC (American National Standards Institute/Standards Planning And Requirements Committee) architecture, after the committee that proposed it (Tsichritzis & Klug, 1978).

The three-schema architecture is a convenient tool with which the user can visualize the schema levels in a database system. Most DBMSs do not separate the three levels completely and explicitly, but they support the three-schema architecture to some extent. Some older DBMSs may include physical-level details in the conceptual schema. The three-level ANSI architecture has an important place in database technology development because it clearly separates the users' external level, the database's conceptual level, and the internal storage level for designing a database.
It is very much applicable in the design of DBMSs, even today. In most DBMSs that support user views, external schemas are specified in the same data model that describes the conceptual-level information (for example, a relational DBMS like Oracle or SQLServer uses SQL for this).

Notice that the three schemas are only descriptions of data; the actual data is stored at the physical level only. In the three-schema architecture, each user group refers to its own external schema. Hence, the DBMS must transform a request specified on an external schema into a request against the conceptual schema, and then into a request on the internal schema for processing over the stored database. If the request is a database retrieval, the data extracted from the stored database must be reformatted to match the user's external view. The processes of transforming requests and results between levels are called mappings. These mappings may be time-consuming, so some DBMSs—especially those that are meant to support small databases—do not support external views. Even in such systems, however, it is necessary to transform requests between the conceptual and internal levels.

2.2.2 Data Independence

The three-schema architecture can be used to further explain the concept of data independence, which can be defined as the capacity to change the schema at one level of a database system without having to change the schema at the next higher level. We can define two types of data independence:

1. Logical data independence is the capacity to change the conceptual schema without having to change external schemas or application programs. We may change the conceptual schema to expand the database (by adding a record type or data item), to change constraints, or to reduce the database (by removing a record type or data item). In the last case, external schemas that refer only to the remaining data should not be affected. For example, the external schema of Figure 1.5(a) should not be affected by changing the GRADE_REPORT file (or record type) shown in Figure 1.2 into the one shown in Figure 1.6(a). Only the view definition and the mappings need to be changed in a DBMS that supports logical data independence. After the conceptual schema undergoes a logical reorganization, application programs that reference the external schema constructs must work as before. Changes to constraints can be applied to the conceptual schema without affecting the external schemas or application programs.

2. Physical data independence is the capacity to change the internal schema without having to change the conceptual schema. Hence, the external schemas need not be changed as well. Changes to the internal schema may be needed because some physical files were reorganized—for example, by creating additional access structures—to improve the performance of retrieval or update. If the same data as before remains in the database, we should not have to change the conceptual schema. For example, providing an access path to improve retrieval speed of SECTION records (Figure 1.2) by semester and year should not require a query such as list all sections offered in fall 2008 to be changed, although the query would be executed more efficiently by the DBMS by utilizing the new access path.
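As a small, self-contained illustration of both kinds of data independence (a sketch under assumed names, not the book's figures), an SQL view can play the role of an external schema and an index the role of an internal-schema change.

```python
# Sketch of data independence using sqlite3. The table, view, and index names
# are illustrative assumptions; the view plays the role of an external schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SECTION (
    Section_identifier INTEGER, Course_number TEXT,
    Semester TEXT, Year INTEGER, Instructor TEXT
);
-- External schema: a user view that exposes only part of the database.
CREATE VIEW FALL_SECTIONS AS
    SELECT Course_number, Section_identifier
    FROM SECTION WHERE Semester = 'Fall';
""")

# Logical data independence: expanding the conceptual schema (a new data item)
# does not require changing the view or the programs that query it.
conn.execute("ALTER TABLE SECTION ADD COLUMN Classroom TEXT")
print(conn.execute("SELECT * FROM FALL_SECTIONS").fetchall())  # still works

# Physical data independence: adding an access path (an index) changes only the
# internal schema; the query text below is unchanged, it just runs faster.
conn.execute("CREATE INDEX idx_sec_sem_year ON SECTION (Semester, Year)")
rows = conn.execute(
    "SELECT * FROM SECTION WHERE Semester = 'Fall' AND Year = 2008").fetchall()
```

The point is only that neither the view text nor the application query changed; what changes is the mapping information the DBMS maintains between levels.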
Generally, physical data independence exists in most databases and file environments where physical details, such as the exact location of data on disk, and hardware details of storage encoding, placement, compression, splitting, merging of records, and so on are hidden from the user. Applications remain unaware of these details. On the other hand, logical data independence is harder to achieve because it allows structural and constraint changes without affecting application programs—a much stricter requirement.

Whenever we have a multiple-level DBMS, its catalog must be expanded to include information on how to map requests and data among the various levels. The DBMS uses additional software to accomplish these mappings by referring to the mapping information in the catalog. Data independence occurs because when the schema is changed at some level, the schema at the next higher level remains unchanged; only the mapping between the two levels is changed. Hence, application programs referring to the higher-level schema need not be changed.

2.3 Database Languages and Interfaces

In Section 1.4 we discussed the variety of users supported by a DBMS. The DBMS must provide appropriate languages and interfaces for each category of users. In this section we discuss the types of languages and interfaces provided by a DBMS and the user categories targeted by each interface.

2.3.1 DBMS Languages

Once the design of a database is completed and a DBMS is chosen to implement the database, the first step is to specify conceptual and internal schemas for the database and any mappings between the two. In many DBMSs where no strict separation of levels is maintained, one language, called the data definition language (DDL), is used by the DBA and by database designers to define both schemas. The DBMS will have a DDL compiler whose function is to process DDL statements in order to identify descriptions of the schema constructs and to store the schema description in the DBMS catalog.

In DBMSs where a clear separation is maintained between the conceptual and internal levels, the DDL is used to specify the conceptual schema only. Another language, the storage definition language (SDL), is used to specify the internal schema. The mappings between the two schemas may be specified in either one of these languages. In most relational DBMSs today, there is no specific language that performs the role of SDL. Instead, the internal schema is specified by a combination of functions, parameters, and specifications related to storage of files. These permit the DBA staff to control indexing choices and mapping of data to storage. For a true three-schema architecture, we would need a third language, the view definition language (VDL), to specify user views and their mappings to the conceptual schema, but in most DBMSs the DDL is used to define both conceptual and external schemas. In relational DBMSs, SQL is used in the role of VDL to define user or application views as results of predefined queries (see Chapters 6 and 7).

Once the database schemas are compiled and the database is populated with data, users must have some means to manipulate the database. Typical manipulations include retrieval, insertion, deletion, and modification of the data. The DBMS provides a set of operations or a language called the data manipulation language (DML) for these purposes.
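The sketch below shows the four typical manipulations issued through a DML from a host program, here Python with its built-in sqlite3 module; the COURSE table and the values used are illustrative assumptions.

```python
# Sketch of the four typical DML manipulations (retrieval, insertion, deletion,
# modification) issued from a host program via Python's sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE COURSE (Course_name TEXT, Course_number TEXT, "
             "Credit_hours INTEGER, Department TEXT)")

# Insertion
conn.execute("INSERT INTO COURSE VALUES ('Database', 'CS3380', 3, 'CS')")
conn.execute("INSERT INTO COURSE VALUES ('Discrete Math', 'MATH2410', 3, 'MATH')")

# Retrieval (a set-oriented, declarative request: what to get, not how)
for row in conn.execute("SELECT Course_name FROM COURSE WHERE Department = 'CS'"):
    print(row[0])

# Modification
conn.execute("UPDATE COURSE SET Credit_hours = 4 WHERE Course_number = 'CS3380'")

# Deletion
conn.execute("DELETE FROM COURSE WHERE Course_number = 'MATH2410'")
```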
In current DBMSs, the preceding types of languages are usually not considered distinct languages; rather, a comprehensive integrated language is used that includes constructs for conceptual schema definition, view definition, and data manipulation. Storage definition is typically kept separate, since it is used for defining physical storage structures to fine-tune the performance of the database system, which is usually done by the DBA staff. A typical example of a comprehensive database language is the SQL relational database language (see Chapters 6 and 7), which represents a combination of DDL, VDL, and DML, as well as statements for constraint specification, schema evolution, and many other features. The SDL was a component in early versions of SQL but has been removed from the language to keep it at the conceptual and external levels only.

There are two main types of DMLs. A high-level or nonprocedural DML can be used on its own to specify complex database operations concisely. Many DBMSs allow high-level DML statements either to be entered interactively from a display monitor or terminal or to be embedded in a general-purpose programming language. In the latter case, DML statements must be identified within the program so that they can be extracted by a precompiler and processed by the DBMS. A low-level or procedural DML must be embedded in a general-purpose programming language. This type of DML typically retrieves individual records or objects from the database and processes each separately. Therefore, it needs to use programming language constructs, such as looping, to retrieve and process each record from a set of records. Low-level DMLs are also called record-at-a-time DMLs because of this property. High-level DMLs, such as SQL, can specify and retrieve many records in a single DML statement; therefore, they are called set-at-a-time or set-oriented DMLs. A query in a high-level DML often specifies which data to retrieve rather than how to retrieve it; therefore, such languages are also called declarative.

Whenever DML commands, whether high level or low level, are embedded in a general-purpose programming language, that language is called the host language and the DML is called the data sublanguage.[10] On the other hand, a high-level DML used in a standalone interactive manner is called a query language. In general, both retrieval and update commands of a high-level DML may be used interactively and are hence considered part of the query language.[11]

[10] In object databases, the host and data sublanguages typically form one integrated language—for example, C++ with some extensions to support database functionality. Some relational systems also provide integrated languages—for example, Oracle's PL/SQL.

[11] According to the English meaning of the word query, it should really be used to describe retrievals only, not updates.

Casual end users typically use a high-level query language to specify their requests, whereas programmers use the DML in its embedded form. For naive and parametric users, there usually are user-friendly interfaces for interacting with the database; these can also be used by casual users or others who do not want to learn the details of a high-level query language. We discuss these types of interfaces next.

2.3.2 DBMS Interfaces

User-friendly interfaces provided by a DBMS may include the following:

Menu-based Interfaces for Web Clients or Browsing. These interfaces present the user with lists of options (called menus) that lead the user through the formulation of a request. Menus do away with the need to memorize the specific commands and syntax of a query language; rather, the query is composed step-by-step by picking options from a menu that is displayed by the system. Pull-down menus are a very popular technique in Web-based user interfaces.
They are also often used in browsing interfaces, which allow a user to look through the contents of a database in an exploratory and unstructured manner.

Apps for Mobile Devices. These interfaces present mobile users with access to their data. For example, banking, reservations, and insurance companies, among many others, provide apps that allow users to access their data through a mobile phone or mobile device. The apps have built-in programmed interfaces that typically allow users to login using their account name and password; the apps then provide a limited menu of options for mobile access to the user data, as well as options such as paying bills (for banks) or making reservations (for reservation Web sites).

Forms-based Interfaces. A forms-based interface displays a form to each user. Users can fill out all of the form entries to insert new data, or they can fill out only certain entries, in which case the DBMS will retrieve matching data for the remaining entries. Forms are usually designed and programmed for naive users as interfaces to canned transactions. Many DBMSs have forms specification languages, which are special languages that help programmers specify such forms. SQL*Forms is a form-based language that specifies queries using a form designed in conjunction with the relational database schema. Oracle Forms is a component of the Oracle product suite that provides an extensive set of features to design and build applications using forms. Some systems have utilities that define a form by letting the end user interactively construct a sample form on the screen.

Graphical User Interfaces. A GUI typically displays a schema to the user in diagrammatic form. The user then can specify a query by manipulating the diagram. In many cases, GUIs utilize both menus and forms.

Natural Language Interfaces. These interfaces accept requests written in English or some other language and attempt to understand them. A natural language interface usually has its own schema, which is similar to the database conceptual schema, as well as a dictionary of important words. The natural language interface refers to the words in its schema, as well as to the set of standard words in its dictionary, that are used to interpret the request. If the interpretation is successful, the interface generates a high-level query corresponding to the natural language request and submits it to the DBMS for processing; otherwise, a dialogue is started with the user to clarify the request.

Keyword-based Database Search. These are somewhat similar to Web search engines, which accept strings of natural language (like English or Spanish) words and match them with documents at specific sites (for local search engines) or Web pages on the Web at large (for engines like Google or Ask). They use predefined indexes on words and use ranking functions to retrieve and present resulting documents in a decreasing degree of match.
Such "free form" textual query interfaces are not yet common in structured relational databases, although a research area called keyword-based querying has emerged recently for relational databases.

Speech Input and Output. Limited use of speech as an input query and speech as an answer to a question or result of a request is becoming commonplace. Applications with limited vocabularies, such as inquiries for telephone directory, flight arrival/departure, and credit card account information, are allowing speech for input and output to enable customers to access this information. The speech input is detected using a library of predefined words and used to set up the parameters that are supplied to the queries. For output, a similar conversion from text or numbers into speech takes place.

Interfaces for Parametric Users. Parametric users, such as bank tellers, often have a small set of operations that they must perform repeatedly. For example, a teller is able to use single function keys to invoke routine and repetitive transactions such as account deposits or withdrawals, or balance inquiries. Systems analysts and programmers design and implement a special interface for each known class of naive users. Usually a small set of abbreviated commands is included, with the goal of minimizing the number of keystrokes required for each request.

Interfaces for the DBA. Most database systems contain privileged commands that can be used only by the DBA staff. These include commands for creating accounts, setting system parameters, granting account authorization, changing a schema, and reorganizing the storage structures of a database.

2.4 The Database System Environment

A DBMS is a complex software system. In this section we discuss the types of software components that constitute a DBMS and the types of computer system software with which the DBMS interacts.

2.4.1 DBMS Component Modules

Figure 2.3 illustrates, in a simplified form, the typical DBMS components. The figure is divided into two parts. The top part of the figure refers to the various users of the database environment and their interfaces. The lower part shows the internal modules of the DBMS responsible for storage of data and processing of transactions.

The database and the DBMS catalog are usually stored on disk. Access to the disk is controlled primarily by the operating system (OS), which schedules disk read/write. Many DBMSs have their own buffer management module to schedule disk read/write, because management of buffer storage has a considerable effect on performance. Reducing disk read/write improves performance considerably. A higher-level stored data manager module of the DBMS controls access to DBMS information that is stored on disk, whether it is part of the database or the catalog.

Let us consider the top part of Figure 2.3 first. It shows interfaces for the DBA staff, casual users who work with interactive interfaces to formulate queries, application programmers who create programs using some host programming languages, and parametric users who do data entry work by supplying parameters to predefined transactions. The DBA staff works on defining the database and tuning it by making changes to its definition using the DDL and other privileged commands.

The DDL compiler processes schema definitions, specified in the DDL, and stores descriptions of the schemas (meta-data) in the DBMS catalog.
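Because the schema descriptions end up in the catalog, they can themselves be queried. The sketch below uses SQLite only as a stand-in; its particular catalog views (sqlite_master, PRAGMA table_info) are assumptions of this example, and other systems expose comparable meta-data through interfaces such as an information_schema.

```python
# Sketch: after the DDL is processed, the schema description (meta-data) lives
# in the catalog and can itself be queried. SQLite is used only as an example;
# many relational DBMSs expose similar data through an information_schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE STUDENT (Name TEXT, Student_number INTEGER, "
             "Class INTEGER, Major TEXT)")

# The catalog records the schema constructs...
print(conn.execute(
    "SELECT name, sql FROM sqlite_master WHERE type = 'table'").fetchall())

# ...and the data items (columns) with their data types.
print(conn.execute("PRAGMA table_info(STUDENT)").fetchall())
```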
The catalog includes information such as the names and sizes of files, names and data types of data items, storage details of each file, mapping information among schemas, and constraints.

[Figure 2.3: Component modules of a DBMS and their interactions. The top part shows the users (DBA staff, casual users, application programmers, and parametric users) and their inputs: DDL statements and privileged commands go to the DDL compiler, interactive queries go to the query compiler and query optimizer, and application programs pass through a precompiler and DML compiler to become compiled (canned) transactions. The lower part shows the runtime database processor, which works with the stored data manager, the system catalog/data dictionary, and the concurrency control and backup/recovery subsystems to execute queries and transactions against the stored database.]

In addition, the catalog stores many other types of information that are needed by the DBMS modules, which can then look up the catalog information as needed.

Casual users and persons with occasional need for information from the database interact using the interactive query interface in Figure 2.3. We have not explicitly shown any menu-based or form-based or mobile interactions that are typically used to generate the interactive query automatically or to access canned transactions. These queries are parsed and validated for correctness of the query syntax, the names of files and data elements, and so on by a query compiler that compiles them into an internal form. This internal query is subjected to query optimization (discussed in Chapters 18 and 19). Among other things, the query optimizer is concerned with the rearrangement and possible reordering of operations, elimination of redundancies, and use of efficient search algorithms during execution. It consults the system catalog for statistical and other physical information about the stored data and generates executable code that performs the necessary operations for the query and makes calls on the runtime processor.

Application programmers write programs in host languages such as Java, C, or C++ that are submitted to a precompiler. The precompiler extracts DML commands from an application program written in a host programming language. These commands are sent to the DML compiler for compilation into object code for database access. The rest of the program is sent to the host language compiler. The object codes for the DML commands and the rest of the program are linked, forming a canned transaction whose executable code includes calls to the runtime database processor. It is also becoming increasingly common to use scripting languages such as PHP and Python to write database programs. Canned transactions are executed repeatedly by parametric users via PCs or mobile apps; these users simply supply the parameters to the transactions. Each execution is considered to be a separate transaction. An example is a bank payment transaction where the account number, payee, and amount may be supplied as parameters.

In the lower part of Figure 2.3, the runtime database processor executes (1) the privileged commands, (2) the executable query plans, and (3) the canned transactions with runtime parameters. It works with the system catalog and may update it with statistics. It also works with the stored data manager, which in turn uses basic operating system services for carrying out low-level input/output (read/write) operations between the disk and main memory.
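The bank payment example above is a typical canned transaction; the following is a minimal sketch of what such a parameterized transaction could look like (the function, table, and values are illustrative assumptions, and SQLite stands in for the DBMS).

```python
# Sketch of a "canned transaction" in the spirit of the bank payment example:
# the logic is fixed, and a parametric user supplies only the parameters.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ACCOUNT (Account_number INTEGER PRIMARY KEY, Balance REAL);
INSERT INTO ACCOUNT VALUES (101, 500.0), (202, 120.0);
""")

def pay(account_number: int, payee_account: int, amount: float) -> None:
    """Canned payment transaction; each call is a separate transaction."""
    with conn:  # commits on success, rolls back if an exception is raised
        conn.execute("UPDATE ACCOUNT SET Balance = Balance - ? "
                     "WHERE Account_number = ?", (amount, account_number))
        conn.execute("UPDATE ACCOUNT SET Balance = Balance + ? "
                     "WHERE Account_number = ?", (amount, payee_account))

pay(101, 202, 75.0)   # the parametric user supplies only these parameters
print(conn.execute("SELECT * FROM ACCOUNT ORDER BY Account_number").fetchall())
# [(101, 425.0), (202, 195.0)]
```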
The runtime database processor handles other aspects of data transfer, such as management of buffers in the main memory. Some DBMSs have their own buffer management module whereas others depend on the OS for buffer management. We have shown concurrency control and backup and recovery systems separately as a module in this figure. They are integrated into the working of the runtime database processor for purposes of transaction management.

It is common to have the client program that accesses the DBMS running on a separate computer or device from the computer on which the database resides. The former is called the client computer running DBMS client software and the latter is called the database server. In many cases, the client accesses a middle computer, called the application server, which in turn accesses the database server. We elaborate on this topic in Section 2.5.

Figure 2.3 is not meant to describe a specific DBMS; rather, it illustrates typical DBMS modules. The DBMS interacts with the operating system when disk accesses—to the database or to the catalog—are needed. If the computer system is shared by many users, the OS will schedule DBMS disk access requests and DBMS processing along with other processes. On the other hand, if the computer system is mainly dedicated to running the database server, the DBMS will control main memory buffering of disk pages. The DBMS also interfaces with compilers for general-purpose host programming languages, and with application servers and client programs running on separate machines through the system network interface.

2.4.2 Database System Utilities

In addition to possessing the software modules just described, most DBMSs have database utilities that help the DBA manage the database system. Common utilities have the following types of functions:

Loading. A loading utility is used to load existing data files—such as text files or sequential files—into the database. Usually, the current (source) format of the data file and the desired (target) database file structure are specified to the utility, which then automatically reformats the data and stores it in the database. With the proliferation of DBMSs, transferring data from one DBMS to another is becoming common in many organizations. Some vendors offer conversion tools that generate the appropriate loading programs, given the existing source and target database storage descriptions (internal schemas).

Backup. A backup utility creates a backup copy of the database, usually by dumping the entire database onto tape or other mass storage medium. The backup copy can be used to restore the database in case of catastrophic disk failure. Incremental backups are also often used, where only changes since the previous backup are recorded. Incremental backup is more complex, but saves storage space.

Database storage reorganization. This utility can be used to reorganize a set of database files into different file organizations and create new access paths to improve performance.

Performance monitoring. Such a utility monitors database usage and provides statistics to the DBA. The DBA uses the statistics in making decisions such as whether or not to reorganize files or whether to add or drop indexes to improve performance.

Other utilities may be available for sorting files, handling data compression, monitoring access by users, interfacing with the network, and performing other functions.
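As a rough sketch of what the core of a loading utility does (the file name, table layout, and type conversions below are assumptions), the following reads records from a source CSV file and stores them in a target table.

```python
# Sketch of the core of a loading utility: read records from a source text file
# (CSV here) and store them, reformatted, in a target database table.
import csv
import sqlite3

def load_students(csv_path: str, conn: sqlite3.Connection) -> int:
    """Load rows from a CSV file into the STUDENT table; returns rows loaded."""
    with open(csv_path, newline="") as f:
        rows = [(r["Name"], int(r["Student_number"]), int(r["Class"]), r["Major"])
                for r in csv.DictReader(f)]
    with conn:
        conn.executemany("INSERT INTO STUDENT VALUES (?, ?, ?, ?)", rows)
    return len(rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE STUDENT (Name TEXT, Student_number INTEGER, "
             "Class INTEGER, Major TEXT)")
# count = load_students("students.csv", conn)   # hypothetical source file
```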
2.4.3 Tools, Application Environments, and Communications Facilities

Other tools are often available to database designers, users, and the DBMS. CASE tools[12] are used in the design phase of database systems. Another tool that can be quite useful in large organizations is an expanded data dictionary (or data repository) system. In addition to storing catalog information about schemas and constraints, the data dictionary stores other information, such as design decisions, usage standards, application program descriptions, and user information. Such a system is also called an information repository. This information can be accessed directly by users or the DBA when needed. A data dictionary utility is similar to the DBMS catalog, but it includes a wider variety of information and is accessed mainly by users rather than by the DBMS software.

[12] Although CASE stands for computer-aided software engineering, many CASE tools are used primarily for database design.

Application development environments, such as PowerBuilder (Sybase) or JBuilder (Borland), have been quite popular. These systems provide an environment for developing database applications and include facilities that help in many facets of database systems, including database design, GUI development, querying and updating, and application program development.

The DBMS also needs to interface with communications software, whose function is to allow users at locations remote from the database system site to access the database through computer terminals, workstations, or personal computers. These are connected to the database site through data communications hardware such as Internet routers, phone lines, long-haul networks, local networks, or satellite communication devices. Many commercial database systems have communication packages that work with the DBMS. The integrated DBMS and data communications system is called a DB/DC system. In addition, some distributed DBMSs are physically distributed over multiple machines. In this case, communications networks are needed to connect the machines. These are often local area networks (LANs), but they can also be other types of networks.

2.5 Centralized and Client/Server Architectures for DBMSs

2.5.1 Centralized DBMSs Architecture

Architectures for DBMSs have followed trends similar to those for general computer system architectures. Older architectures used mainframe computers to provide the main processing for all system functions, including user application programs and user interface programs, as well as all the DBMS functionality. The reason was that in older systems, most users accessed the DBMS via computer terminals that did not have processing power and only provided display capabilities. Therefore, all processing was performed remotely on the computer system housing the DBMS, and only display information and controls were sent from the computer to the display terminals, which were connected to the central computer via various types of communications networks.

As prices of hardware declined, most users replaced their terminals with PCs and workstations, and more recently with mobile devices. At first, database systems used these computers similarly to how they had used display terminals, so that the DBMS itself was still a centralized DBMS in which all the DBMS functionality,
application program execution, and user interface processing were carried out on one machine. Figure 2.4 illustrates the physical components in a centralized architecture. Gradually, DBMS systems started to exploit the available processing power at the user side, which led to client/server DBMS architectures.

[Figure 2.4: A physical centralized architecture. Display terminals and monitors are connected through a network to a central computer whose software includes the application programs, terminal display control, text editors, compilers, the DBMS, and the operating system, all running on shared hardware (CPU, memory, disk, and I/O devices such as printers and tape drives connected over a system bus).]

2.5.2 Basic Client/Server Architectures

First, we discuss client/server architecture in general; then we discuss how it is applied to DBMSs. The client/server architecture was developed to deal with computing environments in which a large number of PCs, workstations, file servers, printers, database servers, Web servers, e-mail servers, and other software and equipment are connected via a network. The idea is to define specialized servers with specific functionalities. For example, it is possible to connect a number of PCs or small workstations as clients to a file server that maintains the files of the client machines. Another machine can be designated as a printer server by being connected to various printers; all print requests by the clients are forwarded to this machine. Web servers or e-mail servers also fall into the specialized server category. The resources provided by specialized servers can be accessed by many client machines. The client machines provide the user with the appropriate interfaces to utilize these servers, as well as with local processing power to run local applications. This concept can be carried over to other software packages, with specialized programs—such as a CAD (computer-aided design) package—being stored on specific server machines and being made accessible to multiple clients.

Figure 2.5 illustrates client/server architecture at the logical level; Figure 2.6 is a simplified diagram that shows the physical architecture.

[Figure 2.5: Logical two-tier client/server architecture. Multiple clients are connected through a network to specialized servers such as a print server, a file server, and a DBMS server.]

Some machines would be client sites only (for example, mobile devices or workstations/PCs that have only client software installed). Other machines would be dedicated servers, and others would have both client and server functionality. The concept of client/server architecture assumes an underlying framework that consists of many PCs/workstations and mobile devices as well as a smaller number of server machines, connected via wireless networks or LANs and other types of computer networks. A client in this framework is typically a user machine that provides user interface capabilities and local processing. When a client requires access to additional functionality—such as database access—that does not exist at the client, it connects to a server that provides the needed functionality. A server is a system containing both hardware and software that can provide services to the client machines, such as file access, printing, archiving, or database access. In general, some machines install only client software, others only server software, and still others may include both client and server software, as illustrated in Figure 2.6.
However, it is more common for client and server software to run on separate machines.

[Figure 2.6: Physical two-tier client/server architecture. Client sites (diskless clients and clients with disks), dedicated server sites, and machines that combine server and client software are connected through a communication network across sites 1 through n.]

Two main types of basic DBMS architectures were created on this underlying client/server framework: two-tier and three-tier.[13] We discuss them next.

[13] There are many other variations of client/server architectures. We discuss the two most basic ones here.

2.5.3 Two-Tier Client/Server Architectures for DBMSs

In relational database management systems (RDBMSs), many of which started as centralized systems, the system components that were first moved to the client side were the user interface and application programs. Because SQL (see Chapters 6 and 7) provided a standard language for RDBMSs, this created a logical dividing point between client and server. Hence, the query and transaction functionality related to SQL processing remained on the server side. In such an architecture, the server is often called a query server or transaction server because it provides these two functionalities. In an RDBMS, the server is also often called an SQL server.

The user interface programs and application programs can run on the client side. When DBMS access is required, the program establishes a connection to the DBMS (which is on the server side); once the connection is created, the client program can communicate with the DBMS. A standard called Open Database Connectivity (ODBC) provides an application programming interface (API), which allows client-side programs to call the DBMS, as long as both client and server machines have the necessary software installed. Most DBMS vendors provide ODBC drivers for their systems. A client program can actually connect to several RDBMSs and send query and transaction requests using the ODBC API, which are then processed at the server sites. Any query results are sent back to the client program, which can process and display the results as needed. A related standard for the Java programming language, called JDBC, has also been defined. This allows Java client programs to access one or more DBMSs through a standard interface.

The architectures described here are called two-tier architectures because the software components are distributed over two systems: client and server. The advantages of this architecture are its simplicity and seamless compatibility with existing systems. The emergence of the Web changed the roles of clients and servers, leading to the three-tier architecture.

2.5.4 Three-Tier and n-Tier Architectures for Web Applications

Many Web applications use an architecture called the three-tier architecture, which adds an intermediate layer between the client and the database server, as illustrated in Figure 2.7(a).

[Figure 2.7: Logical three-tier client/server architecture, with a couple of commonly used nomenclatures. In view (a) the tiers are the client (GUI, Web interface), the application server or Web server (application programs, Web pages), and the database server (database management system). In view (b) the same tiers are labeled the presentation layer, the business logic layer, and the database services layer.]

This intermediate layer or middle tier is called the application server or the Web server, depending on the application.
This server plays an intermediary role by running application programs and storing business rules (procedures or constraints) that are used to access data from the database server. It can also improve database security by checking a client's credentials before forwarding a request to the database server. Clients contain user interfaces and Web browsers. The intermediate server accepts requests from the client, processes the request and sends database queries and commands to the database server, and then acts as a conduit for passing (partially) processed data from the database server to the clients, where it may be processed further and filtered to be presented to the users. Thus, the user interface, application rules, and data access act as the three tiers. Figure 2.7(b) shows another view of the three-tier architecture used by database and other application package vendors. The presentation layer displays information to the user and allows data entry. The business logic layer handles intermediate rules and constraints before data is passed up to the user or down to the DBMS. The bottom layer includes all data management services. The middle layer can also act as a Web server, which retrieves query results from the database server and formats them into dynamic Web pages that are viewed by the Web browser at the client side. The client machine is typically a PC or mobile device connected to the Web.

Other architectures have also been proposed. It is possible to divide the layers between the user and the stored data further into finer components, thereby giving rise to n-tier architectures, where n may be four or five tiers. Typically, the business logic layer is divided into multiple layers. Besides distributing programming and data throughout a network, n-tier applications afford the advantage that any one tier can run on an appropriate processor or operating system platform and can be handled independently. Vendors of ERP (enterprise resource planning) and CRM (customer relationship management) packages often use a middleware layer, which accounts for the front-end modules (clients) communicating with a number of back-end databases (servers).

Advances in encryption and decryption technology make it safer to transfer sensitive data from server to client in encrypted form, where it will be decrypted. The latter can be done by the hardware or by advanced software. This technology gives higher levels of data security, but the network security issues remain a major concern. Various technologies for data compression also help to transfer large amounts of data from servers to clients over wired and wireless networks.

2.6 Classification of Database Management Systems

Several criteria can be used to classify DBMSs. The first is the data model on which the DBMS is based. The main data model used in many current commercial DBMSs is the relational data model, and the systems based on this model are known as SQL systems. The object data model has been implemented in some commercial systems but has not had widespread use. Recently, so-called big data systems, also known as key-value storage systems and NOSQL systems, use various data models: document-based, graph-based, column-based, and key-value data models. Many legacy applications still run on database systems based on the hierarchical and network data models.
The relational DBMSs are evolving continuously, and, in particular, have been incorporating many of the concepts that were developed in object databases. This has led to a new class of DBMSs called object-relational DBMSs. We can categorize DBMSs based on the data model: relational, object, object-relational, NOSQL, key-value, hierarchical, network, and other. Some experimental DBMSs are based on the XML (eXtensible Markup Language) model, which is a tree-structured data model. These have been called native XML DBMSs. Several commercial relational DBMSs have added XML interfaces and storage to their products.

The second criterion used to classify DBMSs is the number of users supported by the system. Single-user systems support only one user at a time and are mostly used with PCs. Multiuser systems, which include the majority of DBMSs, support concurrent multiple users.

The third criterion is the number of sites over which the database is distributed. A DBMS is centralized if the data is stored at a single computer site. A centralized DBMS can support multiple users, but the DBMS and the database reside totally at a single computer site. A distributed DBMS (DDBMS) can have the actual database and DBMS software distributed over many sites connected by a computer network. Big data systems are often massively distributed, with hundreds of sites. The data is often replicated on multiple sites so that failure of a site will not make some data unavailable. Homogeneous DDBMSs use the same DBMS software at all the sites, whereas heterogeneous DDBMSs can use different DBMS software at each site. It is also possible to develop middleware software to access several autonomous preexisting databases stored under heterogeneous DBMSs. This leads to a federated DBMS (or multidatabase system), in which the participating DBMSs are loosely coupled and have a degree of local autonomy. Many DDBMSs use client-server architecture, as we described in Section 2.5.

The fourth criterion is cost. It is difficult to propose a classification of DBMSs based on cost. Today we have open source (free) DBMS products like MySQL and PostgreSQL that are supported by third-party vendors with additional services. The main RDBMS products are available as free examination 30-day copy versions as well as personal versions, which may cost under $100 and allow a fair amount of functionality. The giant systems are being sold in modular form with components to handle distribution, replication, parallel processing, mobile capability, and so on, and with a large number of parameters that must be defined for the configuration. Furthermore, they are sold in the form of licenses—site licenses allow unlimited use of the database system with any number of copies running at the customer site. Another type of license limits the number of concurrent users or the number of user seats at a location. Standalone single-user versions of some systems like Microsoft Access are sold per copy or included in the overall configuration of a desktop or laptop. In addition, data warehousing and mining features, as well as support for additional data types, are made available at extra cost. It is possible to pay millions of dollars for the installation and maintenance of large database systems annually.

We can also classify a DBMS on the basis of the types of access path options for storing files. One well-known family of DBMSs is based on inverted file structures.
Finally, a DBMS can be general purpose or special purpose. When performance is a primary consideration, a special-purpose DBMS can be designed and built for a specific application; such a system cannot be used for other applications without major changes. Many airline reservations and telephone directory systems developed in the past are special-purpose DBMSs. These fall into the category of online transaction processing (OLTP) systems, which must support a large number of concurrent transactions without imposing excessive delays.

Let us briefly elaborate on the main criterion for classifying DBMSs: the data model. The relational data model represents a database as a collection of tables, where each table can be stored as a separate file. The database in Figure 1.2 resembles a basic relational representation. Most relational databases use the high-level query language called SQL and support a limited form of user views. We discuss the relational model and its languages and operations in Chapters 5 through 8, and techniques for programming relational applications in Chapters 10 and 11.

The object data model defines a database in terms of objects, their properties, and their operations. Objects with the same structure and behavior belong to a class, and classes are organized into hierarchies (or acyclic graphs). The operations of each class are specified in terms of predefined procedures called methods. Relational DBMSs have been extending their models to incorporate object database concepts and other capabilities; these systems are referred to as object-relational or extended relational systems. We discuss object databases and object-relational systems in Chapter 12.

Big data systems are based on various data models, with the following four data models most common. The key-value data model associates a unique key with each value (which can be a record or object) and provides very fast access to a value given its key. The document data model is based on JSON (JavaScript Object Notation) and stores the data as documents, which somewhat resemble complex objects. The graph data model stores objects as graph nodes and relationships among objects as directed graph edges. Finally, the column-based data models store the columns of rows clustered on disk pages for fast access and allow multiple versions of the data. We will discuss some of these in more detail in Chapter 24.

The XML model has emerged as a standard for exchanging data over the Web and has been used as a basis for implementing several prototype native XML systems. XML uses hierarchical tree structures. It combines database concepts with concepts from document representation models. Data is represented as elements; with the use of tags, data can be nested to create complex tree structures. This model conceptually resembles the object model but uses different terminology. XML capabilities have been added to many commercial DBMS products. We present an overview of XML in Chapter 13.

Two older, historically important data models, now known as legacy data models, are the network and hierarchical models. The network model represents data as record types and also represents a limited type of 1:N relationship, called a set type. A 1:N, or one-to-many, relationship relates one instance of a record to many record instances using some pointer linking mechanism in these models.
The network model, also known as the CODASYL DBTG model,14 has an associated record-at-a-time language that must be embedded in a host programming language. The network DML was proposed in the 1971 Database Task Group (DBTG) Report as an extension of the COBOL language.

The hierarchical model represents data as hierarchical tree structures. Each hierarchy represents a number of related records. There is no standard language for the hierarchical model. A popular hierarchical DML is DL/1 of the IMS system, which dominated the DBMS market for over 20 years, between 1965 and 1985; DL/1 was a de facto industry standard for a long time.15

14 CODASYL DBTG stands for Conference on Data Systems Languages Database Task Group, which is the committee that specified the network model and its language.
15 The full chapters on the network and hierarchical models from the second edition of this book are available from this book's Companion Web site at http://www.aw.com/elmasri.

2.7 Summary

In this chapter we introduced the main concepts used in database systems. We defined a data model and we distinguished three main categories:

- High-level or conceptual data models (based on entities and relationships)
- Low-level or physical data models
- Representational or implementation data models (record-based, object-oriented)

We distinguished the schema, or description of a database, from the database itself. The schema does not change very often, whereas the database state changes every time data is inserted, deleted, or modified. Then we described the three-schema DBMS architecture, which allows three schema levels:

- An internal schema describes the physical storage structure of the database.
- A conceptual schema is a high-level description of the whole database.
- External schemas describe the views of different user groups.

A DBMS that cleanly separates the three levels must have mappings among the schemas to transform requests and query results from one level to the next. Most DBMSs do not separate the three levels completely. We used the three-schema architecture to define the concepts of logical and physical data independence.

Then we discussed the main types of languages and interfaces that DBMSs support. A data definition language (DDL) is used to define the database conceptual schema. In most DBMSs, the DDL also defines user views and, sometimes, storage structures; in other DBMSs, separate languages or functions exist for specifying storage structures. This distinction is fading away in today's relational implementations, with SQL serving as a catchall language to perform multiple roles, including view definition. The storage definition part (SDL) was included in SQL's early versions, but is now typically implemented as special commands for the DBA in relational DBMSs. The DBMS compiles all schema definitions and stores their descriptions in the DBMS catalog.

A data manipulation language (DML) is used for specifying database retrievals and updates. DMLs can be high level (set-oriented, nonprocedural) or low level (record-oriented, procedural). A high-level DML can be embedded in a host programming language, or it can be used as a standalone language; in the latter case it is often called a query language.

We discussed different types of interfaces provided by DBMSs and the types of DBMS users with which each interface is associated.
Then we discussed the database system environment, typical DBMS software modules, and DBMS utilities for helping users and the DBA staff perform their tasks. We continued with an overview of the two-tier and three-tier architectures for database applications.

Finally, we classified DBMSs according to several criteria: data model, number of users, number of sites, types of access paths, and cost. We discussed the availability of DBMSs and additional modules—from no cost in the form of open source software to configurations that annually cost millions to maintain. We also pointed out the variety of licensing arrangements for DBMS and related products. The main classification of DBMSs is based on the data model. We briefly discussed the main data models used in current commercial DBMSs.

Review Questions

2.1. Define the following terms: data model, database schema, database state, internal schema, conceptual schema, external schema, data independence, DDL, DML, SDL, VDL, query language, host language, data sublanguage, database utility, catalog, client/server architecture, three-tier architecture, and n-tier architecture.
2.2. Discuss the main categories of data models. What are the basic differences among the relational model, the object model, and the XML model?
2.3. What is the difference between a database schema and a database state?
2.4. Describe the three-schema architecture. Why do we need mappings among schema levels? How do different schema definition languages support this architecture?
2.5. What is the difference between logical data independence and physical data independence? Which one is harder to achieve? Why?
2.6. What is the difference between procedural and nonprocedural DMLs?
2.7. Discuss the different types of user-friendly interfaces and the types of users who typically use each.
2.8. With what other computer system software does a DBMS interact?
2.9. What is the difference between the two-tier and three-tier client/server architectures?
2.10. Discuss some types of database utilities and tools and their functions.
2.11. What is the additional functionality incorporated in n-tier architecture (n > 3)?

Exercises

2.12. Think of different users for the database shown in Figure 1.2. What types of applications would each user need? To which user category would each belong, and what type of interface would each need?
2.13. Choose a database application with which you are familiar. Design a schema and show a sample database for that application, using the notation of Figures 1.2 and 2.1. What types of additional information and constraints would you like to represent in the schema? Think of several users of your database, and design a view for each.
2.14. If you were designing a Web-based system to make airline reservations and sell airline tickets, which DBMS architecture would you choose from Section 2.5? Why? Why would the other architectures not be a good choice?
2.15. Consider Figure 2.1. In addition to constraints relating the values of columns in one table to columns in another table, there are also constraints that impose restrictions on values in a column or a combination of columns within a table. One such constraint dictates that a column or a group of columns must be unique across all rows in the table. For example, in the STUDENT table, the Student_number column must be unique (to prevent two different students from having the same Student_number).
Identify the column or the group of columns in the other tables that must be unique across all rows in the table.

Selected Bibliography

Many database textbooks, including Date (2004), Silberschatz et al. (2011), Ramakrishnan and Gehrke (2003), Garcia-Molina et al. (2002, 2009), and Abiteboul et al. (1995), provide a discussion of the various database concepts presented here. Tsichritzis and Lochovsky (1982) is an early textbook on data models. Tsichritzis and Klug (1978) and Jardine (1977) present the three-schema architecture, which was first suggested in the DBTG CODASYL report (1971) and later in an American National Standards Institute (ANSI) report (1975). An in-depth analysis of the relational data model and some of its possible extensions is given in Codd (1990). The proposed standard for object-oriented databases is described in Cattell et al. (2000). Many documents describing XML are available on the Web, such as XML (2005). Examples of database utilities are the ETI Connect, Analyze and Transform tools (http://www.eti.com) and the database administration tool DBArtisan from Embarcadero Technologies (http://www.embarcadero.com).

chapter 6
Basic SQL

The SQL language may be considered one of the major reasons for the commercial success of relational databases. Because it became a standard for relational databases, users were less concerned about migrating their database applications from other types of database systems—for example, older network or hierarchical systems—to relational systems. This is because even if the users became dissatisfied with the particular relational DBMS product they were using, converting to another relational DBMS product was not expected to be too expensive and time-consuming because both systems followed the same language standards. In practice, of course, there are differences among various commercial relational DBMS packages. However, if the user is diligent in using only those features that are part of the standard, and if two relational DBMSs faithfully support the standard, then conversion between the two systems should be simplified. Another advantage of having such a standard is that users may write statements in a database application program that can access data stored in two or more relational DBMSs without having to change the database sublanguage (SQL), as long as all of the relational DBMSs involved support standard SQL.

This chapter presents the practical relational model, which is based on the SQL standard for commercial relational DBMSs, whereas Chapter 5 presented the most important concepts underlying the formal relational data model. In Chapter 8 (Sections 8.1 through 8.5), we shall discuss the relational algebra operations, which are very important for understanding the types of requests that may be specified on a relational database. They are also important for query processing and optimization in a relational DBMS, as we shall see in Chapters 18 and 19. However, the relational algebra operations are too low-level for most commercial DBMS users because a query in relational algebra is written as a sequence of operations that, when executed, produces the required result. Hence, the user must specify how—that is, in what order—to execute the query operations.
On the other hand, the SQL language provides a higher-level declarative language interface, so the user only specifies what the result is to be, leaving the actual optimization and decisions on how to execute the query to the DBMS (a brief example appears later in this introduction). Although SQL includes some features from relational algebra, it is based to a greater extent on the tuple relational calculus, which we describe in Section 8.6. However, the SQL syntax is more user-friendly than either of the two formal languages.

The name SQL is presently expanded as Structured Query Language. Originally, SQL was called SEQUEL (Structured English QUEry Language) and was designed and implemented at IBM Research as the interface for an experimental relational database system called SYSTEM R. SQL is now the standard language for commercial relational DBMSs. The standardization of SQL is a joint effort by the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO), and the first SQL standard is called SQL-86 or SQL1. A revised and much expanded standard called SQL-92 (also referred to as SQL2) was subsequently developed. The next standard that is well recognized is SQL:1999, which started out as SQL3. Additional updates to the standard are SQL:2003 and SQL:2006, which added XML features (see Chapter 13) among other updates to the language. Another update in 2008 incorporated more object database features into SQL (see Chapter 12), and a further update is SQL:2011. We will try to cover the latest version of SQL as much as possible, but some of the newer features are discussed in later chapters. It is also not possible to cover the language in its entirety in this text. It is important to note that when new features are added to SQL, it usually takes a few years for some of these features to make it into the commercial SQL DBMSs.

SQL is a comprehensive database language: It has statements for data definitions, queries, and updates. Hence, it is both a DDL and a DML. In addition, it has facilities for defining views on the database, for specifying security and authorization, for defining integrity constraints, and for specifying transaction controls. It also has rules for embedding SQL statements into a general-purpose programming language such as Java or C/C++.1

1 Originally, SQL had statements for creating and dropping indexes on the files that represent relations, but these have been dropped from the SQL standard for some time.

The later SQL standards (starting with SQL:1999) are divided into a core specification plus specialized extensions. The core is supposed to be implemented by all RDBMS vendors that are SQL compliant. The extensions can be implemented as optional modules to be purchased independently for specific database applications such as data mining, spatial data, temporal data, data warehousing, online analytical processing (OLAP), multimedia data, and so on.

Because the subject of SQL is both important and extensive, we devote two chapters to its basic features. In this chapter, Section 6.1 describes the SQL DDL commands for creating schemas and tables, and gives an overview of the basic data types in SQL. Section 6.2 presents how basic constraints such as key and referential integrity are specified. Section 6.3 describes the basic SQL constructs for specifying retrieval queries, and Section 6.4 describes the SQL commands for insertion, deletion, and update.
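As a brief preview of the declarative style discussed above, consider the following retrieval, sketched against the COMPANY schema that is defined in Figure 6.1 later in this chapter (the salary threshold is an arbitrary value chosen only for illustration). The query states which rows and columns are wanted; the DBMS decides how to evaluate it.

    SELECT  Fname, Lname
    FROM    EMPLOYEE
    WHERE   Salary > 30000;

An equivalent relational algebra formulation would spell out an explicit sequence of operations—a selection followed by a projection—whereas here that ordering is left entirely to the DBMS.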
In Chapter 7, we will describe more complex SQL retrieval queries, as well as the ALTER commands for changing the schema. We will also describe the CREATE ASSERTION statement, which allows the specification of more general constraints on the database, and the concept of triggers, which is presented in more detail in Chapter 26. We discuss the SQL facility for defining views on the database in Chapter 7. Views are also called virtual or derived tables because they present the user with what appear to be tables; however, the information in those tables is derived from previously defined tables.

Section 6.5 lists some SQL features that are presented in other chapters of the book; these include object-oriented features in Chapter 12, XML in Chapter 13, transaction control in Chapter 20, active databases (triggers) in Chapter 26, online analytical processing (OLAP) features in Chapter 29, and security/authorization in Chapter 30. Section 6.6 summarizes the chapter. Chapters 10 and 11 discuss the various database programming techniques for programming with SQL.

6.1 SQL Data Definition and Data Types

SQL uses the terms table, row, and column for the formal relational model terms relation, tuple, and attribute, respectively. We will use the corresponding terms interchangeably. The main SQL command for data definition is the CREATE statement, which can be used to create schemas, tables (relations), types, and domains, as well as other constructs such as views, assertions, and triggers. Before we describe the relevant CREATE statements, we discuss schema and catalog concepts in Section 6.1.1 to place our discussion in perspective. Section 6.1.2 describes how tables are created, and Section 6.1.3 describes the most important data types available for attribute specification. Because the SQL specification is very large, we give a description of the most important features. Further details can be found in the various SQL standards documents (see end-of-chapter bibliographic notes).

6.1.1 Schema and Catalog Concepts in SQL

Early versions of SQL did not include the concept of a relational database schema; all tables (relations) were considered part of the same schema. The concept of an SQL schema was incorporated starting with SQL2 in order to group together tables and other constructs that belong to the same database application (in some systems, a schema is called a database). An SQL schema is identified by a schema name and includes an authorization identifier to indicate the user or account who owns the schema, as well as descriptors for each element in the schema. Schema elements include tables, types, constraints, views, domains, and other constructs (such as authorization grants) that describe the schema. A schema is created via the CREATE SCHEMA statement, which can include all the schema elements' definitions. Alternatively, the schema can be assigned a name and authorization identifier, and the elements can be defined later. For example, the following statement creates a schema called COMPANY owned by the user with authorization identifier 'Jsmith'. Note that each statement in SQL ends with a semicolon.

    CREATE SCHEMA COMPANY AUTHORIZATION 'Jsmith';

In general, not all users are authorized to create schemas and schema elements. The privilege to create schemas, tables, and other constructs must be explicitly granted to the relevant user accounts by the system administrator or DBA.
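To illustrate the first option mentioned above—including the element definitions directly inside the CREATE SCHEMA statement—the following sketch creates the COMPANY schema and one of its tables in a single statement. The DEPARTMENT attributes are a simplified subset of those in Figure 6.1 (foreign keys omitted for brevity); support for this combined form varies among commercial systems, so in practice the elements are often created afterward with separate CREATE statements instead.

    CREATE SCHEMA COMPANY AUTHORIZATION 'Jsmith'
        CREATE TABLE DEPARTMENT
          ( Dname           VARCHAR(15)   NOT NULL,
            Dnumber         INT           NOT NULL,
            Mgr_start_date  DATE,
            PRIMARY KEY (Dnumber),
            UNIQUE (Dname) );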
In addition to the concept of a schema, SQL uses the concept of a catalog—a named collection of schemas.2 Database installations typically have a default environment and schema, so when a user connects and logs in to that database installation, the user can refer directly to tables and other constructs within that schema without having to specify a particular schema name. A catalog always contains a special schema called INFORMATION_SCHEMA, which provides information on all the schemas in the catalog and all the element descriptors in these schemas. Integrity constraints such as referential integrity can be defined between relations only if they exist in schemas within the same catalog. Schemas within the same catalog can also share certain elements, such as type and domain definitions.

2 SQL also includes the concept of a cluster of catalogs.

6.1.2 The CREATE TABLE Command in SQL

The CREATE TABLE command is used to specify a new relation by giving it a name and specifying its attributes and initial constraints. The attributes are specified first, and each attribute is given a name, a data type to specify its domain of values, and possibly attribute constraints, such as NOT NULL. The key, entity integrity, and referential integrity constraints can be specified within the CREATE TABLE statement after the attributes are declared, or they can be added later using the ALTER TABLE command (see Chapter 7). Figure 6.1 shows sample data definition statements in SQL for the COMPANY relational database schema shown in Figure 3.7.

Typically, the SQL schema in which the relations are declared is implicitly specified in the environment in which the CREATE TABLE statements are executed. Alternatively, we can explicitly attach the schema name to the relation name, separated by a period. For example, by writing CREATE TABLE COMPANY.EMPLOYEE rather than CREATE TABLE EMPLOYEE as in Figure 6.1, we can explicitly (rather than implicitly) make the EMPLOYEE table part of the COMPANY schema.

The relations declared through CREATE TABLE statements are called base tables (or base relations); this means that the table and its rows are actually created

Figure 6.1
SQL CREATE TABLE data definition statements for defining the COMPANY schema from Figure 5.7.

    CREATE TABLE EMPLOYEE
      ( Fname      VARCHAR(15)    NOT NULL,
        Minit      CHAR,
        Lname      VARCHAR(15)    NOT NULL,
        Ssn        CHAR(9)        NOT NULL,
        Bdate      DATE,
        Address    VARCHAR(30),
        Sex        CHAR,
        Salary     DECIMAL(10,2),
        Super_ssn  CHAR(9),
        Dno        INT            NOT NULL,
        PRIMARY KEY (Ssn) );

    CREATE TABLE DEPARTMENT
      ( Dname           VARCHAR(15)    NOT NULL,
        Dnumber         INT            NOT NULL,
        Mgr_ssn         CHAR(9)        NOT NULL,
        Mgr_start_date  DATE,
        PRIMARY KEY (Dnumber),
        UNIQUE (Dname),
        FOREIGN KEY (Mgr_ssn) REFERENCES EMPLOYEE(Ssn) );

    CREATE TABLE DEPT_LOCATIONS
      ( Dnumber    INT            NOT NULL,
        Dlocation  VARCHAR(15)    NOT NULL,
        PRIMARY KEY (Dnumber, Dlocation),
        FOREIGN KEY (Dnumber) REFERENCES DEPARTMENT(Dnumber) );

    CREATE TABLE PROJECT
      ( Pname      VARCHAR(15)