Summary

This document provides a foundational overview of data engineering, including its definition, lifecycle, and historical context. It covers data movement, manipulation, and management, the role of data engineers, the stages of the data engineering lifecycle, and essential undercurrents such as security and data management. It also collects lecture notes on database transaction management (concurrency control and recovery) and database security.

Full Transcript


Data Engineering Definition

Endless definitions of data engineering exist. Data engineering is about the movement, manipulation, and management of data: a set of operations aimed at creating interfaces and mechanisms for the flow and access of information. It takes dedicated specialists—data engineers—to maintain data so that it remains available and usable by others. The field can be thought of as a superset of business intelligence and data warehousing that brings in more elements from software engineering.

Definition: Data engineering is the development, implementation, and maintenance of systems and processes that take in raw data and produce high-quality, consistent information that supports downstream use cases, such as analysis and machine learning. Data engineering is the intersection of security, data management, DataOps, data architecture, orchestration, and software engineering. A data engineer manages the data engineering lifecycle, beginning with getting data from source systems and ending with serving data for use cases such as analysis or machine learning.

History

- The early days, 1980 to 2000: from data warehousing to the web; roots in data warehousing and business intelligence.
- The early 2000s: the birth of contemporary data engineering. Innovations started decentralizing and breaking apart traditionally monolithic services; the "big data" era had begun (Apache Hadoop, AWS, Amazon S3, DynamoDB).
- The 2000s and 2010s: big data engineering; simplification of open-source big data tools.
- The 2020s: engineering for the data lifecycle. The trend is moving toward decentralized, modularized, managed, and highly abstracted tools. The modern data stack represents a collection of off-the-shelf open source and third-party products assembled to make analysts' lives easier.
https://mattturck.wpenginepowered.com/wp-content/uploads/2021/12/Data-and-AI-Landscape-2021-v3-small.jpg

Data Engineering and Data Science

Two views exist: data engineering as a subdiscipline of data science, or data engineering as a standalone discipline.

Data Maturity and the Data Engineer

The level of data engineering complexity within a company depends a great deal on the company's data maturity: the progression toward higher data utilization, capabilities, and integration across the organization. Data maturity models come in many versions, such as Data Management Maturity (DMM).

The Background and Skills of a Data Engineer

Business responsibilities:
- Know how to communicate with nontechnical and technical people.
- Understand how to scope and gather business and product requirements.
- Understand the cultural foundations of Agile, DevOps, and DataOps.
- Control costs.
- Learn continuously.

Technical responsibilities:
- Understand how to build architectures that optimize performance and cost at a high level, using prepackaged or homegrown components.
- All skills related to the data engineering lifecycle.

Data Engineers and Other Technical Roles

The Data Engineering Lifecycle

Comprises stages that turn raw data ingredients into a useful product, ready for consumption by analysts, data scientists, ML engineers, and others.

Stages: generation, storage, ingestion, transformation, serving data.

Undercurrents cut across multiple stages of the data engineering lifecycle: security, data management, DataOps, data architecture, orchestration, and software engineering.

Generation

A source system is the origin of the data used in the data engineering lifecycle.
Examples include relational database systems, NoSQL stores, IoT devices, and data streams. All must be evaluated thoroughly: essential characteristics, persistence, frequency of generation, errors, and schema presence.

Storage

After ingesting data, you need a place to store it. Storage runs across the entire data engineering lifecycle, often occurring in multiple places in a data pipeline, with storage systems crossing over with source systems, ingestion, transformation, and serving. Key engineering questions focus on choosing a storage system for a data warehouse, data lakehouse, database, or object storage.

Temperatures of data relate to data access frequency:
- Hot data is the most frequently accessed data.
- Lukewarm data might be accessed every so often—say, every week or month.
- Cold data is seldom queried and is appropriate for storing in an archival system.
There is no one-size-fits-all universal storage recommendation.

Ingestion

Data ingestion moves data from source systems, which are normally outside the data engineer's direct control. Source systems and ingestion represent the most significant bottlenecks.

Batch versus streaming: data are inherently streaming; batch ingestion is simply a specialized and convenient way of processing this stream in large chunks (by size or by interval). Real-time (or near real-time) means that the data is available to a downstream system a short time after it is produced. (A minimal sketch contrasting the two appears at the end of this block.)

Push versus pull:
- Push model: a source system writes data out to a target, whether a database, object store, or filesystem.
- Pull model: data is retrieved from the source system.
- Hybrid models combine both.

Transformation

Data needs to be changed from its original form into something useful for downstream use cases: map data into the correct types, clean, normalize, and select or create new features. Transformation is often folded into other phases of the lifecycle.

Serving Data - Analytics

Data has value when it's used for practical purposes; data vanity projects are a major risk for companies. Analytics is the core of most data endeavors.
- BI describes a business's past and current state.
- Operational analytics focuses on the present and on the fine-grained details of operations, consumed in real time.
- Embedded analytics (customer-facing): the request rate for reports goes up dramatically.
- Multitenancy: data engineers may choose to house data for many customers in common tables to allow a unified view for internal analytics and ML. This data is presented externally to individual customers through logical views with appropriately defined controls and filters.

Serving Data - Machine Learning

The feature store is a recently developed tool that combines data engineering and ML engineering. Before investing a ton of resources into ML, take the time to build a solid data foundation.

Reverse ETL

Takes processed data from the output side of the data engineering lifecycle and feeds it back into source systems. It allows us to take analytics, scored models, etc., and feed them back into production systems or SaaS platforms. This is especially important as businesses rely increasingly on SaaS and external platforms.

Major Undercurrents Across the Data Engineering Lifecycle

Data engineering now encompasses far more than tools and technology; it incorporates traditional enterprise practices, the undercurrents.

Data Management and Governance

Data management is the development, execution, and supervision of plans, policies, programs, and practices that deliver, control, protect, and enhance the value of data and information assets throughout their lifecycle.
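To make the batch-versus-streaming distinction described under Ingestion above concrete, here is a minimal, self-contained Python sketch (not from the source notes; the event shape and batch size are invented). It treats a generator of events as the stream and shows batch ingestion as the same stream processed in fixed-size chunks.

```python
import time
from itertools import islice
from typing import Dict, Iterable, Iterator, List

def event_stream(n: int = 10) -> Iterator[Dict]:
    """Pretend source system: data is inherently a stream of events."""
    for i in range(n):
        yield {"id": i, "value": i * 10, "ts": time.time()}

def ingest_streaming(events: Iterable[Dict]) -> None:
    """Real-time style: hand each event downstream as soon as it is produced."""
    for event in events:
        print("stream ->", event["id"])

def ingest_batch(events: Iterable[Dict], batch_size: int = 4) -> None:
    """Batch style: the same stream, processed in convenient fixed-size chunks."""
    it = iter(events)
    while True:
        batch: List[Dict] = list(islice(it, batch_size))
        if not batch:
            break
        print("batch  ->", [e["id"] for e in batch])

if __name__ == "__main__":
    ingest_streaming(event_stream())
    ingest_batch(event_stream())
```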
Facets of data management include:
- Data governance, including discoverability and accountability
- Data modeling and design
- Data lineage
- Storage and operations
- Data integration and interoperability
- Data lifecycle management
- Data systems for advanced analytics and ML
- Ethics and privacy

Data governance is, first and foremost, a data management function to ensure the quality, integrity, security, and usability of the data collected by an organization. Its main categories are discoverability, security, and accountability.

Orchestration

The process of coordinating many jobs to run as quickly and efficiently as possible on a scheduled cadence. Orchestration differs from schedulers (crons), which are aware only of time. An orchestration engine (such as Apache Airflow) builds in metadata on job dependencies, generally in the form of a directed acyclic graph (DAG), and also provides job history, visualization, and alerting. (A minimal DAG sketch appears at the end of this block.)

DataOps

DataOps maps the best practices of Agile methodology, DevOps, and statistical process control (SPC) to data. DataOps is a collection of technical practices, workflows, cultural norms, and architectural patterns that enable:
- Rapid innovation and experimentation, delivering new insights to customers with increasing velocity
- Extremely high data quality and very low error rates
- Collaboration across complex arrays of people, technology, and environments
- Clear measurement, monitoring, and transparency of results

DataOps aims to improve the release and quality of data products. Data products differ from software products because of the way data is used: a data product is built around sound business logic and metrics, and its users make decisions or build models that perform automated actions. DataOps has three core technical elements: automation, monitoring and observability, and incident response.

Software Engineering

- Core data processing code (SQL)
- Development of open-source frameworks (the Hadoop ecosystem)
- Streaming
- Infrastructure as code (IaC), which applies software engineering practices to the configuration and management of infrastructure
- Pipelines as code, the core concept of present-day orchestration systems, which touch every stage of the data engineering lifecycle
- General-purpose problem-solving

Enterprise Architecture

In TOGAF, "enterprise" in the context of "enterprise architecture" can denote an entire enterprise—encompassing all of its information and technology services, processes, and infrastructure—or a specific domain within the enterprise. Enterprise architecture (EA) is an organizational model: an abstract representation of an enterprise that aligns strategy, operations, and technology to create a roadmap for success. Enterprise architecture is the design of systems to support change in the enterprise, achieved by flexible and reversible decisions reached through careful evaluation of trade-offs.

Data Architecture

Reflects the current and future state of data systems that support an organization's long-term data needs and strategy; it is part of enterprise architecture. Data architecture is a description of the structure and interaction of the enterprise's major types and sources of data, logical data assets, physical data assets, and data management resources. It is the design of systems to support the evolving data needs of an enterprise, achieved by flexible and reversible decisions reached through a careful evaluation of trade-offs. Good data architecture serves business requirements with a common, widely reusable set of building blocks while maintaining flexibility and making appropriate trade-offs.
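To illustrate the dependency-as-DAG idea described under Orchestration above, here is a minimal Airflow-style sketch. It is illustrative only and assumes Apache Airflow 2.x is installed (the exact scheduling parameter name varies slightly between Airflow versions); the task names and DAG id are invented.

```python
# Minimal Apache Airflow DAG sketch (assumes Airflow 2.x).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")

def transform():
    print("clean, normalize, and map types")

def serve():
    print("publish tables for analytics and ML")

with DAG(
    dag_id="example_lifecycle",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",   # the scheduled cadence
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_serve = PythonOperator(task_id="serve", python_callable=serve)

    # Dependency metadata: the orchestrator builds the DAG extract -> transform -> serve
    # and can record history, visualize runs, and alert on failures.
    t_extract >> t_transform >> t_serve
```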
Good data architecture is also flexible and easily maintainable. It is never finished.

Principles of Good Data Architecture

1. Choose common components wisely.
2. Plan for failure.
3. Architect for scalability.
4. Architecture is leadership.
5. Always be architecting.
6. Build loosely coupled systems.
7. Make reversible decisions.
8. Prioritize security.
9. Embrace FinOps.

These draw on the AWS Well-Architected pillars (operational excellence, security, reliability, performance efficiency, cost optimization, sustainability) and on Google Cloud's principles for cloud-native architecture (design for automation, be smart with state, favor managed services, practice defense in depth, always be architecting).

Examples and Types of Data Architecture

- Data warehouse and data mart
- Data lake
- Data lakehouse
- Modern data stack
- Lambda architecture
- Kappa architecture
- Architecture for IoT
- Data mesh

Data Warehouse

A subject-oriented, integrated, nonvolatile, and time-variant collection of data in support of management's decisions; the central data hub used for reporting and analysis. Today the scalable, pay-as-you-go model has made cloud data warehouses accessible even to tiny companies. The organizational data warehouse architecture has two main characteristics:
- It separates analytics processes (OLAP) from production databases (online transaction processing).
- It centralizes and organizes data.

ETL vs ELT

In the ELT data warehouse architecture, data is moved more or less directly from production systems into a staging area in the data warehouse. Staging in this setting indicates that the data is in a raw form. Data is processed in batches and transformed, and the output is written into tables and views for analytics. (A minimal ELT sketch appears at the end of this block.)

Data Marts

A data mart is a more refined subset of a warehouse designed to serve analytics and reporting, focused on a single suborganization, department, or line of business. It makes data more easily accessible to analysts and report developers. Data marts provide an additional stage of transformation beyond that provided by the initial ETL or ELT pipelines.

Data Lake vs Data Lakehouse

A data lake simply dumps all data—structured and unstructured—into a central location. Data lake 1.0 made solid contributions but generally failed due to complexity. The data lakehouse, introduced by Databricks, incorporates the controls, data management, and data structures found in a data warehouse while still housing data in object storage and supporting a variety of query and transformation engines. The lakehouse supports ACID transactions.

Modern Data Stack

Uses cloud-based, plug-and-play, easy-to-use, off-the-shelf components to create a modular and cost-effective data architecture. Typical components include data pipelines, storage, transformation, data management/governance, monitoring, visualization, and exploration.

Lambda Architecture

A reaction to the need to analyze streamed data. It comprises systems operating independently of each other—batch, streaming, and serving. The source system is ideally immutable and append-only, sending data to two destinations for processing: stream and batch. It has several shortcomings.

Kappa Architecture

Why not just use a stream-processing platform as the backbone for all data handling—ingestion, storage, and serving? The Kappa architecture represents a true event-based architecture: real-time and batch processing can be applied seamlessly to the same data by reading the live event stream directly and replaying large chunks of data for batch processing. It is not yet widely adopted.
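To make the ELT pattern above concrete, here is a small, self-contained Python sketch that uses SQLite as a stand-in warehouse (illustrative only; the table names and the tiny "raw" dataset are invented): data is landed untransformed in a staging table, then transformed inside the warehouse with SQL.

```python
import sqlite3

raw_orders = [
    ("2024-01-01", "alice", "42.50"),
    ("2024-01-01", "bob", "13.00"),
    ("2024-01-02", "alice", "7.25"),
]

con = sqlite3.connect(":memory:")

# Extract + Load: land the data in a raw staging area, untransformed.
con.execute("CREATE TABLE staging_orders (order_date TEXT, customer TEXT, amount TEXT)")
con.executemany("INSERT INTO staging_orders VALUES (?, ?, ?)", raw_orders)

# Transform inside the warehouse: cast types and aggregate into an analytics table.
con.execute("""
    CREATE TABLE daily_revenue AS
    SELECT order_date, SUM(CAST(amount AS REAL)) AS revenue
    FROM staging_orders
    GROUP BY order_date
""")

for row in con.execute("SELECT * FROM daily_revenue ORDER BY order_date"):
    print(row)   # e.g. ('2024-01-01', 55.5)
```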
Architecture for IoT

The Internet of Things (IoT) is the distributed collection of devices, aka things—computers, sensors, mobile devices, smart home devices, and anything else with an internet connection.

Data Mesh

A recent response to sprawling monolithic data platforms. It attempts to invert the challenges of centralized data architecture, taking up the concepts of domain-driven design. A big part of the data mesh is decentralization.

Resources

Joe Reis and Matt Housley (2022). Fundamentals of Data Engineering: Plan and Build Robust Data Systems. O'Reilly Media. 554 pp.
Matt Bornstein, Jennifer Li, and Martin Casado (2020). Emerging Architectures for Modern Data Infrastructure.

Chapter 20: Transaction Management (transparencies © Pearson Education Limited 1995, 2005)

Objectives
- Function and importance of transactions.
- Properties of transactions.
- Concurrency control: the meaning of serializability; how locking can ensure serializability; deadlock and how it can be resolved; how timestamping can ensure serializability; optimistic concurrency control; granularity of locking.
- Recovery control: some causes of database failure; the purpose of the transaction log file; the purpose of checkpointing; how to recover following database failure.
- Alternative models for long-duration transactions.

Transaction Support

A transaction is an action, or series of actions, carried out by a user or application, which reads or updates the contents of the database.
- It is a logical unit of work on the database.
- An application program is a series of transactions with non-database processing in between.
- A transaction transforms the database from one consistent state to another, although consistency may be violated during the transaction.

(Figure: Example Transaction)

A transaction can have one of two outcomes:
- Success: the transaction commits and the database reaches a new consistent state.
- Failure: the transaction aborts, and the database must be restored to the consistent state it was in before the transaction started. Such a transaction is rolled back or undone.
A committed transaction cannot be aborted. An aborted transaction that is rolled back can be restarted later. (A small commit/rollback sketch appears at the end of this block.)

(Figure: State Transition Diagram for a Transaction)

Properties of Transactions

The four basic (ACID) properties of a transaction are:
- Atomicity: the 'all or nothing' property.
- Consistency: must transform the database from one consistent state to another.
- Isolation: partial effects of incomplete transactions should not be visible to other transactions.
- Durability: effects of a committed transaction are permanent and must not be lost because of a later failure.

(Figure: DBMS Transaction Subsystem)

Concurrency Control

The process of managing simultaneous operations on the database without having them interfere with one another. It prevents interference when two or more users are accessing the database simultaneously and at least one is updating data. Although two transactions may be correct in themselves, interleaving of operations may produce an incorrect result.

Need for Concurrency Control

Three examples of potential problems caused by concurrency:
- Lost update problem.
- Uncommitted dependency problem.
- Inconsistent analysis problem.
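To make the commit/abort outcomes and atomicity concrete, here is a small, self-contained Python/SQLite sketch (the accounts table and amounts are illustrative, not taken from the slides). A transfer that fails partway is rolled back, so the database returns to its consistent pre-transaction state.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (name TEXT PRIMARY KEY, bal REAL)")
con.executemany("INSERT INTO account VALUES (?, ?)", [("x", 100.0), ("y", 400.0)])
con.commit()

def transfer(amount: float, src: str, dst: str) -> None:
    """Move money atomically: either both updates commit, or neither does."""
    try:
        con.execute("UPDATE account SET bal = bal - ? WHERE name = ?", (amount, src))
        if con.execute("SELECT bal FROM account WHERE name = ?", (src,)).fetchone()[0] < 0:
            raise ValueError("insufficient funds")   # simulate a failure mid-transaction
        con.execute("UPDATE account SET bal = bal + ? WHERE name = ?", (amount, dst))
        con.commit()       # success: the database reaches a new consistent state
    except Exception:
        con.rollback()     # failure: undo all partial effects (atomicity)

transfer(10.0, "x", "y")    # commits: x = 90, y = 410
transfer(500.0, "x", "y")   # aborts: rolled back, balances unchanged
print(con.execute("SELECT * FROM account ORDER BY name").fetchall())
```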
Lost Update Problem

An apparently successfully completed update is overridden by another user. Example: T1 is withdrawing £10 from an account with balance balx, initially £100; T2 is depositing £100 into the same account. Serially, the final balance would be £190. The loss of T2's update is avoided by preventing T1 from reading balx until after T2's update has completed.

Uncommitted Dependency Problem

Occurs when one transaction can see intermediate results of another transaction before it has committed. Example: T4 updates balx to £200 but then aborts, so balx should be back at its original value of £100. T3 has read the new value of balx (£200) and uses it as the basis of a £10 reduction, giving a new balance of £190 instead of £90. The problem is avoided by preventing T3 from reading balx until after T4 commits or aborts.

Inconsistent Analysis Problem

Occurs when a transaction reads several values but a second transaction updates some of them during the execution of the first. Sometimes referred to as a dirty read or unrepeatable read. Example: T6 is totaling the balances of account x (£100), account y (£50), and account z (£25); meanwhile, T5 has transferred £10 from balx to balz, so T6 gets the wrong result (£10 too high). The problem is avoided by preventing T6 from reading balx and balz until after T5 has completed its updates.

Serializability

The objective of a concurrency control protocol is to schedule transactions in such a way as to avoid any interference. We could run transactions serially, but this limits the degree of concurrency or parallelism in the system. Serializability identifies those executions of transactions guaranteed to ensure consistency.

- Schedule: a sequence of reads/writes by a set of concurrent transactions.
- Serial schedule: a schedule where the operations of each transaction are executed consecutively without any interleaved operations from other transactions. There is no guarantee that the results of all serial executions of a given set of transactions will be identical.
- Nonserial schedule: a schedule where operations from a set of concurrent transactions are interleaved. The objective of serializability is to find nonserial schedules that allow transactions to execute concurrently without interfering with one another; in other words, nonserial schedules that are equivalent to some serial schedule. Such a schedule is called serializable.

In serializability, the ordering of reads/writes is important:
(a) If two transactions only read a data item, they do not conflict and order is not important.
(b) If two transactions either read or write completely separate data items, they do not conflict and order is not important.
(c) If one transaction writes a data item and another reads or writes the same data item, the order of execution is important.

(Figure: Example of Conflict Serializability)

A conflict serializable schedule orders any conflicting operations in the same way as some serial execution.
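The notes next describe the precedence-graph test for conflict serializability. As a preview, here is a minimal, self-contained Python sketch (the schedule encoding is invented for the example) that derives precedence edges from conflicting operations, following rules (a)-(c) above, and reports whether the graph contains a cycle.

```python
# Precedence-graph test for conflict serializability (illustrative sketch).
# A schedule is a list of (transaction, operation, item), with operation 'r' or 'w'.
from collections import defaultdict

def precedence_edges(schedule):
    edges = set()
    for i, (ti, op_i, x_i) in enumerate(schedule):
        for tj, op_j, x_j in schedule[i + 1:]:
            if ti != tj and x_i == x_j and (op_i == "w" or op_j == "w"):
                edges.add((ti, tj))   # conflicting operations: earlier txn precedes later
    return edges

def has_cycle(edges):
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
    visited, on_stack = set(), set()
    def dfs(node):
        visited.add(node); on_stack.add(node)
        for nxt in graph[node]:
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.discard(node)
        return False
    return any(dfs(n) for n in list(graph) if n not in visited)

# An interleaving in the spirit of the T9/T10 transfer example discussed in the notes.
schedule = [("T9", "r", "balx"), ("T9", "w", "balx"),
            ("T10", "r", "balx"), ("T10", "w", "balx"),
            ("T10", "r", "baly"), ("T10", "w", "baly"),
            ("T9", "r", "baly"), ("T9", "w", "baly")]
print(has_cycle(precedence_edges(schedule)))   # True -> not conflict serializable
```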
Under the constrained write rule (a transaction updates a data item based on its old value, which is first read), a precedence graph can be used to test for serializability.

Precedence Graph

Create:
- a node for each transaction;
- a directed edge Ti → Tj if Tj reads the value of an item written by Ti;
- a directed edge Ti → Tj if Tj writes a value into an item after it has been read by Ti.
If the precedence graph contains a cycle, the schedule is not conflict serializable.

Example: a non-conflict serializable schedule. T9 is transferring £100 from one account with balance balx to another account with balance baly; T10 is increasing the balances of these two accounts by 10%. The precedence graph has a cycle, so the schedule is not serializable.

View Serializability

Offers a less stringent definition of schedule equivalence than conflict serializability. Two schedules S1 and S2 are view equivalent if:
- For each data item x, if Ti reads the initial value of x in S1, Ti must also read the initial value of x in S2.
- For each read on x by Ti in S1, if the value read by Ti was written by Tj, then Ti must also read the value of x produced by Tj in S2.
- For each data item x, if the last write on x is performed by Ti in S1, the same transaction must perform the final write on x in S2.

A schedule is view serializable if it is view equivalent to a serial schedule. Every conflict serializable schedule is view serializable, although the converse is not true. It can be shown that any view serializable schedule that is not conflict serializable contains one or more blind writes. In general, testing whether a schedule is serializable is NP-complete.

(Figure: Example of a View Serializable Schedule)

Recoverability

Serializability identifies schedules that maintain database consistency, assuming no transaction fails. We can also examine the recoverability of transactions within a schedule. If a transaction fails, atomicity requires the effects of the transaction to be undone. Durability states that once a transaction commits, its changes cannot be undone (without running another, compensating, transaction).

Recoverable schedule: a schedule where, for each pair of transactions Ti and Tj, if Tj reads a data item previously written by Ti, then the commit operation of Ti precedes the commit operation of Tj.

Concurrency Control Techniques

Two basic concurrency control techniques: locking and timestamping. Both are conservative approaches: they delay transactions in case they conflict with other transactions. Optimistic methods assume conflict is rare and only check for conflicts at commit.

Locking

A transaction uses locks to deny access to other transactions and so prevent incorrect updates. Locking is the most widely used approach to ensure serializability. Generally, a transaction must claim a shared (read) or exclusive (write) lock on a data item before a read or write. The lock prevents another transaction from modifying the item, or even reading it in the case of a write lock.

Locking - Basic Rules

- If a transaction has a shared lock on an item, it can read but not update the item.
- If a transaction has an exclusive lock on an item, it can both read and update the item.
- Reads cannot conflict, so more than one transaction can hold shared locks simultaneously on the same item.
- An exclusive lock gives a transaction exclusive access to that item.
- Some systems allow a transaction to upgrade a read lock to an exclusive lock, or downgrade an exclusive lock to a shared lock.

Example - Incorrect Locking Schedule

For the two transactions above (T9 and T10), a valid schedule using these rules is:
S = {write_lock(T9, balx), read(T9, balx), write(T9, balx), unlock(T9, balx), write_lock(T10, balx), read(T10, balx), write(T10, balx), unlock(T10, balx), write_lock(T10, baly), read(T10, baly), write(T10, baly), unlock(T10, baly), commit(T10), write_lock(T9, baly), read(T9, baly), write(T9, baly), unlock(T9, baly), commit(T9)}

If at the start balx = 100 and baly = 400, the result should be:
- balx = 220, baly = 330, if T9 executes before T10, or
- balx = 210, baly = 340, if T10 executes before T9.
However, the schedule gives balx = 220 and baly = 340, so S is not a serializable schedule. The problem is that the transactions release locks too soon, resulting in loss of total isolation and atomicity. To guarantee serializability, an additional protocol is needed concerning the positioning of lock and unlock operations in every transaction.

Two-Phase Locking (2PL)

A transaction follows the 2PL protocol if all locking operations precede the first unlock operation in the transaction. The transaction has two phases:
- Growing phase: acquires all locks but cannot release any locks.
- Shrinking phase: releases locks but cannot acquire any new locks.
(A small 2PL lock-manager sketch appears at the end of this block.)

(Figures: Preventing the Lost Update, Uncommitted Dependency, and Inconsistent Analysis Problems using 2PL)

Cascading Rollback

If every transaction in a schedule follows 2PL, the schedule is serializable. However, problems can occur with the interpretation of when locks can be released. Example: transactions T14, T15, and T16 conform to 2PL; T14 aborts. Since T15 is dependent on T14, T15 must also be rolled back; since T16 is dependent on T15, it too must be rolled back. This is called cascading rollback. To prevent this with 2PL, leave the release of all locks until the end of the transaction.
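Here is a toy two-phase locking sketch in Python (illustrative only; single-threaded, with conflicts reported rather than blocked, and transaction and item names invented). Each transaction acquires locks in its growing phase and, as recommended above to avoid cascading rollback, releases them all only at the end.

```python
from collections import defaultdict

class LockManager:
    def __init__(self):
        self.shared = defaultdict(set)   # item -> set of txn ids holding shared locks
        self.exclusive = {}              # item -> txn id holding the exclusive lock

    def read_lock(self, txn, item):
        holder = self.exclusive.get(item)
        if holder is not None and holder != txn:
            return False                 # would block: another txn holds the exclusive lock
        self.shared[item].add(txn)
        return True

    def write_lock(self, txn, item):
        holder = self.exclusive.get(item)
        others_reading = self.shared[item] - {txn}
        if (holder is not None and holder != txn) or others_reading:
            return False                 # would block: conflicting lock held
        self.exclusive[item] = txn       # fresh exclusive lock (or upgrade of own shared lock)
        return True

    def release_all(self, txn):
        """Shrinking phase, deferred to the end of the transaction."""
        for item in list(self.exclusive):
            if self.exclusive[item] == txn:
                del self.exclusive[item]
        for holders in self.shared.values():
            holders.discard(txn)

lm = LockManager()
print(lm.write_lock("T9", "balx"))    # True: T9's growing phase
print(lm.write_lock("T10", "balx"))   # False: T10 must wait, preventing the bad schedule above
lm.release_all("T9")                  # T9 commits and releases everything
print(lm.write_lock("T10", "balx"))   # True: now T10 may proceed
```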
- Rigorous 2PL: the release of all locks is left until the end of the transaction.
- Strict 2PL: holds only exclusive locks until the end of the transaction.

Concurrency Control with Index Structures

We could treat each page of an index as a data item and apply 2PL. However, as indexes are frequently accessed, particularly at the higher levels, this may lead to high lock contention. Two observations can be made about index traversal:
- The search path starts from the root and moves down to the leaf nodes, but the search never moves back up the tree. Thus, once a lower-level node has been accessed, the higher-level nodes in that path will not be used again.
- When a new index value (key and pointer) is being inserted into a leaf node, then if the node is not full, the insertion will not cause changes to higher-level nodes.
This suggests we only have to exclusively lock the leaf node in such a case, and only exclusively lock higher-level nodes if the node is full and has to be split.

This gives the following locking strategy:
- For searches, obtain shared locks on nodes starting at the root and proceeding downwards along the required path. Release the lock on a node once a lock has been obtained on its child node.
- For insertions, a conservative approach would be to obtain exclusive locks on all nodes as we descend the tree to the leaf node to be modified.
- For a more optimistic approach, obtain shared locks on all nodes as we descend to the leaf node to be modified, where we obtain an exclusive lock. If the leaf node has to split, upgrade the shared lock on the parent to an exclusive lock. If this node also has to split, continue to upgrade locks at the next higher level.

Deadlock

An impasse that may result when two (or more) transactions are each waiting for locks held by the other to be released. There is only one way to break deadlock: abort one or more of the transactions. Deadlock should be transparent to the user, so the DBMS should restart the transaction(s). Three general techniques for handling deadlock:
- Timeouts.
- Deadlock prevention.
- Deadlock detection and recovery.

Timeouts

A transaction that requests a lock will wait only for a system-defined period of time. If the lock has not been granted within this period, the lock request times out. In this case, the DBMS assumes the transaction may be deadlocked, even though it may not be, and it aborts and automatically restarts the transaction.

Deadlock Prevention

The DBMS looks ahead to see if a transaction would cause deadlock and never allows deadlock to occur. Transactions can be ordered using transaction timestamps:
- Wait-Die: only an older transaction can wait for a younger one; otherwise the transaction is aborted (dies) and restarted with the same timestamp.
- Wound-Wait: only a younger transaction can wait for an older one. If an older transaction requests a lock held by a younger one, the younger one is aborted (wounded).

Deadlock Detection and Recovery

The DBMS allows deadlock to occur but recognizes it and breaks it. This is usually handled by construction of a wait-for graph (WFG) showing transaction dependencies:
- Create a node for each transaction.
- Create an edge Ti → Tj if Ti is waiting to lock an item locked by Tj.
Deadlock exists if and only if the WFG contains a cycle. The WFG is created at regular intervals.

(Figure: Example Wait-For Graph)

Recovery from Deadlock Detection

Several issues arise: the choice of deadlock victim; how far to roll a transaction back; and avoiding starvation.

Timestamping

Transactions are ordered globally so that older transactions (those with smaller timestamps) get priority in the event of conflict. Conflict is resolved by rolling back and restarting a transaction. There are no locks, so there is no deadlock.

A timestamp is a unique identifier created by the DBMS that indicates the relative starting time of a transaction. It can be generated by using the system clock at the time the transaction started, or by incrementing a logical counter every time a new transaction starts.

A read/write proceeds only if the last update on that data item was carried out by an older transaction; otherwise, the transaction requesting the read/write is restarted and given a new timestamp. Data items also carry timestamps:
- read_timestamp: timestamp of the last transaction to read the item;
- write_timestamp: timestamp of the last transaction to write the item.

Rules for a transaction T with timestamp ts(T) (a small sketch appears at the end of this block):
- Read(x): if ts(T) < write_timestamp(x), x has already been updated by a younger (later) transaction, and T must be aborted and restarted with a new timestamp.
- Write(x): if ts(T) < read_timestamp(x), x has already been read by a younger transaction; roll back T and restart it using a later timestamp.
- Write(x): if ts(T) < write_timestamp(x), x has already been written by a younger transaction; the write can safely be ignored (the ignore obsolete write rule).
- Otherwise, the operation is accepted and executed.

(Figures: Example of Basic Timestamp Ordering; Comparison of Methods)

Multiversion Timestamp Ordering

Versioning of data can be used to increase concurrency. The basic timestamp ordering protocol assumes only one version of a data item exists, and so only one transaction can access a data item at a time. We can instead allow multiple transactions to read and write different versions of the same data item, and ensure each transaction sees a consistent set of versions for all the data items it accesses. In multiversion concurrency control, each write operation creates a new version of a data item while retaining the old version. When a transaction attempts to read a data item, the system selects one version that ensures serializability. Versions can be deleted once they are no longer required.

Optimistic Techniques

Based on the assumption that conflict is rare and that it is more efficient to let transactions proceed without delays to ensure serializability. At commit, a check is made to determine whether a conflict has occurred; if so, the transaction must be rolled back and restarted. Optimistic techniques potentially allow greater concurrency than traditional protocols.
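As a compact illustration of the basic timestamp ordering rules listed above, here is a minimal Python sketch (item names and timestamps are invented; this is a simplified, single-version model, not a full protocol implementation).

```python
# Basic timestamp ordering, including the "ignore obsolete write" rule (illustrative).
class TimestampOrdering:
    def __init__(self):
        self.read_ts = {}    # item -> timestamp of the last (youngest) reader
        self.write_ts = {}   # item -> timestamp of the last (youngest) writer

    def read(self, ts, item):
        if ts < self.write_ts.get(item, 0):
            return "abort"                   # item already updated by a younger transaction
        self.read_ts[item] = max(ts, self.read_ts.get(item, 0))
        return "ok"

    def write(self, ts, item):
        if ts < self.read_ts.get(item, 0):
            return "abort"                   # item already read by a younger transaction
        if ts < self.write_ts.get(item, 0):
            return "ignored"                 # obsolete write: safe to skip
        self.write_ts[item] = ts
        return "ok"

to = TimestampOrdering()
print(to.read(ts=2, item="balx"))    # ok: read_timestamp(balx) becomes 2
print(to.write(ts=1, item="balx"))   # abort: balx already read by a younger transaction
print(to.write(ts=3, item="balx"))   # ok: write_timestamp(balx) becomes 3
print(to.write(ts=2, item="balx"))   # ignored: obsolete write rule
```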
Optimistic techniques have three phases: read, validation, and write.
- Read phase: extends from the start of the transaction until immediately before commit. The transaction reads values from the database and stores them in local variables; updates are applied to a local copy of the data.
- Validation phase: follows the read phase. For a read-only transaction, it checks that the data read are still the current values; if there is no interference, the transaction is committed, otherwise it is aborted and restarted. For an update transaction, it checks that the transaction leaves the database in a consistent state, with serializability maintained.
- Write phase: follows a successful validation phase for update transactions. Updates made to the local copy are applied to the database.

Granularity of Data Items

The size of the data items chosen as the unit of protection by a concurrency control protocol, ranging from coarse to fine:
- The entire database.
- A file.
- A page (or area or database space).
- A record.
- A field value of a record.
Tradeoff: the coarser the granularity, the lower the degree of concurrency; the finer, the more locking information needs to be stored. The best item size depends on the types of transactions.

Hierarchy of Granularity

The granularity of locks can be represented in a hierarchical structure: the root node represents the entire database, level-1 nodes represent files, and so on. When a node is locked, all its descendants are also locked, and the DBMS should check the hierarchical path before granting a lock. An intention lock can be used to lock all ancestors of a locked node; intention locks can be read or write, and are applied top-down and released bottom-up.

(Figure: Levels of Locking)

Database Recovery

The process of restoring the database to a correct state in the event of a failure.

Need for Recovery Control
- Two types of storage: volatile (main memory) and nonvolatile.
- Volatile storage does not survive system crashes.
- Stable storage represents information that has been replicated in several nonvolatile storage media with independent failure modes.

Types of Failures
- System crashes, resulting in loss of main memory.
- Media failures, resulting in loss of parts of secondary storage.
- Application software errors.
- Natural physical disasters.
- Carelessness or unintentional destruction of data or facilities.
- Sabotage.

Transactions and Recovery

Transactions represent the basic unit of recovery; the recovery manager is responsible for atomicity and durability. If a failure occurs between commit and the database buffers being flushed to secondary storage then, to ensure durability, the recovery manager has to redo (rollforward) the transaction's updates. If a transaction had not committed at failure time, the recovery manager has to undo (rollback) any effects of that transaction for atomicity.
- Partial undo: only one transaction has to be undone.
- Global undo: all transactions have to be undone.
(A small redo/undo decision sketch appears after this block.)
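To preview the example that follows, here is a tiny Python sketch (transaction names and statuses are invented) of the decision the recovery manager makes for each transaction at restart: redo those that committed (their buffered writes may not have reached disk) and undo those still active at the time of failure.

```python
# Toy redo/undo decision at restart (illustrative only).
def recovery_actions(transactions):
    """transactions: dict of name -> 'committed' or 'active' at the time of failure."""
    redo = [t for t, status in transactions.items() if status == "committed"]
    undo = [t for t, status in transactions.items() if status == "active"]
    return redo, undo

# Mirrors the example below: T1 and T6 were active at the crash, the rest had committed.
status_at_failure = {"T1": "active", "T2": "committed", "T3": "committed",
                     "T4": "committed", "T5": "committed", "T6": "active"}
redo, undo = recovery_actions(status_at_failure)
print("redo:", redo)   # ['T2', 'T3', 'T4', 'T5']
print("undo:", undo)   # ['T1', 'T6']
```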
Example: the DBMS starts at time t0 but fails at time tf. Assume the data for transactions T2 and T3 have been written to secondary storage. T1 and T6 have to be undone. In the absence of any other information, the recovery manager has to redo T2, T3, T4, and T5.

Recovery Facilities

The DBMS should provide the following facilities to assist with recovery:
- A backup mechanism, which makes periodic backup copies of the database.
- Logging facilities, which keep track of the current state of transactions and database changes.
- A checkpoint facility, which enables in-progress updates to the database to be made permanent.
- A recovery manager, which allows the DBMS to restore the database to a consistent state following a failure.

Log File

Contains information about all updates to the database: transaction records and checkpoint records. It is often used for other purposes as well (for example, auditing). Transaction records contain:
- Transaction identifier.
- Type of log record (transaction start, insert, update, delete, abort, commit).
- Identifier of the data item affected by the database action (insert, delete, and update operations).
- Before-image of the data item.
- After-image of the data item.
- Log management information.

(Figure: Sample Log File)

The log file may be duplexed or triplexed, and is sometimes split into two separate random-access files. It is a potential bottleneck and is critical in determining overall performance.

Checkpointing

A checkpoint is a point of synchronization between the database and the log file; all buffers are force-written to secondary storage. A checkpoint record is created containing the identifiers of all active transactions. When a failure occurs, redo all transactions that committed since the checkpoint and undo all transactions active at the time of the crash. In the previous example, with a checkpoint at time tc, the changes made by T2 and T3 have been written to secondary storage; thus we only redo T4 and T5, and undo T1 and T6.

Recovery Techniques

If the database has been damaged: restore the last backup copy of the database and reapply the updates of committed transactions using the log file. If the database is only inconsistent: undo the changes that caused the inconsistency, and possibly redo some transactions to ensure their updates reach secondary storage; no backup is needed, since the database can be restored using the before- and after-images in the log file.

Three main recovery techniques: deferred update, immediate update, and shadow paging.

Deferred Update

Updates are not written to the database until after a transaction has reached its commit point. If a transaction fails before commit, it will not have modified the database, so no undoing of changes is required. It may be necessary to redo the updates of committed transactions, as their effect may not have reached the database.

Immediate Update

Updates are applied to the database as they occur.
- Need to redo updates of committed transactions following a failure.
- May need to undo the effects of transactions that had not committed at the time of failure.
- It is essential that log records are written before the corresponding write to the database: the write-ahead log protocol.
- If there is no "transaction commit" record in the log, then that transaction was active at the time of failure and must be undone.
- Undo operations are performed in the reverse order to that in which they were written to the log.

Shadow Paging

Maintain two page tables during the life of a transaction: the current page table and the shadow page table. When the transaction starts, the two page tables are the same. The shadow page table is never changed thereafter and is used to restore the database in the event of failure. During the transaction, the current page table records all updates to the database. When the transaction completes, the current page table becomes the shadow page table.

Advanced Transaction Models

The protocols considered so far are suitable for the types of transactions that arise in traditional business applications. Advanced applications (for example, design applications) are characterized by:
- Data has many types, each with a small number of instances.
- Designs may be very large.
- The design is not static but evolves through time.
- Updates are far-reaching.
- Cooperative engineering.
These may result in transactions of long duration, giving rise to the following problems:
- More susceptible to failure: the amount of work lost needs to be minimized.
- May access a large number of data items: concurrency is limited if data is inaccessible for long periods.
- Deadlock is more likely.
- Cooperation through the use of shared data items is restricted by traditional concurrency protocols.

Five advanced transaction models are considered: the nested transaction model, sagas, the multi-level transaction model, dynamic restructuring, and workflow models.

Nested Transaction Model

A transaction is viewed as a hierarchy of subtransactions. The top-level transaction can have a number of child transactions, and each child can also have nested transactions. In Moss's proposal, only leaf-level subtransactions are allowed to perform database operations. Transactions have to commit from the bottom upwards; however, a transaction abort at one level does not have to affect a transaction in progress at a higher level.

A parent is allowed to perform its own recovery:
- Retry the subtransaction.
- Ignore the failure, in which case the subtransaction is nonvital.
- Run a contingency subtransaction.
- Abort.

Updates of committed subtransactions at intermediate levels are visible only within the scope of their immediate parents. Further, the commit of a subtransaction is conditionally subject to the commit or abort of its superiors. Using this model, top-level transactions conform to the traditional ACID properties of a flat transaction.

(Figure: Example of Nested Transactions)

Nested Transaction Model - Advantages
- Modularity: a transaction can be decomposed into a number of subtransactions for the purposes of concurrency and recovery.
- A finer level of granularity for concurrency control and recovery.
- Intra-transaction parallelism.
- Intra-transaction recovery control.
Emulating Nested Transactions using Savepoints

A savepoint is an identifiable point in a flat transaction representing some partially consistent state. It can be used as a restart point for the transaction if a subsequent problem is detected: during execution of the transaction, the user can establish a savepoint that the transaction can later be rolled back to. Unlike nested transactions, savepoints do not support any form of intra-transaction parallelism.

Sagas

"A sequence of (flat) transactions that can be interleaved with other transactions." The DBMS guarantees that either all the transactions in a saga are successfully completed or compensating transactions are run to undo partial execution. A saga has only one level of nesting, and for every subtransaction defined there is a corresponding compensating transaction that will semantically undo the subtransaction's effect. Sagas relax the property of isolation by allowing a saga to reveal its partial results to other concurrently executing transactions before it completes. They are useful when the subtransactions are relatively independent and compensating transactions can be produced; however, it may sometimes be difficult to define a compensating transaction in advance, and the DBMS may need to interact with the user to determine compensation. (A small compensating-transaction sketch appears at the end of this block.)

Multi-level Transaction Model

- Closed nested transactions: atomicity is enforced at the top level.
- Open nested transactions: allow partial results of subtransactions to be seen outside the transaction.
The saga model is an example of an open nested transaction, as is the multi-level transaction model, where the tree of subtransactions is balanced. Nodes at the same depth of the tree correspond to operations of the same level of abstraction in the DBMS, and edges represent the implementation of an operation by a sequence of operations at the next lower level. A traditional flat transaction ensures there are no conflicts at the lowest level (L0). In the multi-level model, two operations at level Li may not conflict even though their implementations at the next lower level Li-1 do.

Example - Multi-level Transaction Model

T7: T71, which increases balx by 5; T72, which subtracts 5 from baly.
T8: T81, which increases baly by 10; T82, which subtracts 2 from balx.
As addition and subtraction commute, these subtransactions can be executed in any order, and the correct result will always be generated.

Dynamic Restructuring

To address the constraints imposed by the ACID properties of flat transactions, two new operations have been proposed: split_transaction and join_transaction. split_transaction splits a transaction into two serializable transactions and divides its actions and resources (for example, locked data items) between the new transactions; the resulting transactions proceed independently. This allows partial results of a transaction to be shared while still preserving its semantics. It can be applied only when it is possible to generate two transactions that are serializable with each other and with all other concurrently executing transactions.
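To illustrate the saga idea of compensating transactions described above, here is a toy Python sketch (the booking steps and their compensations are invented for the example). Either every step succeeds, or the compensating actions of the completed steps are run in reverse order to semantically undo the saga's partial effects.

```python
def run_saga(steps):
    """steps: list of (action, compensation) pairs of zero-argument callables."""
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception as exc:
            print(f"saga failed ({exc}); compensating...")
            for comp in reversed(done):   # undo completed steps in reverse order
                comp()
            return False
    return True

def book_flight():   print("flight booked")
def cancel_flight(): print("flight cancelled (compensation)")
def book_hotel():    raise RuntimeError("no rooms available")
def cancel_hotel():  print("hotel cancelled (compensation)")

run_saga([(book_flight, cancel_flight), (book_hotel, cancel_hotel)])
# Output: flight booked / saga failed (...); compensating... / flight cancelled (compensation)
```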
The conditions that permit a transaction to be split into transactions A and B are:
- AWriteSet ∩ BWriteSet ⊆ BWriteLast: if both A and B write to the same object, B's write operations must follow A's write operations.
- AReadSet ∩ BWriteSet = ∅: A cannot see any results from B.
- BReadSet ∩ AWriteSet = ShareSet: B may see the results of A.
These conditions guarantee that A is serialized before B. However, if A aborts, B must also abort. If both BWriteLast and ShareSet are empty, then A and B can be serialized in any order and both can be committed independently.

join_transaction performs the reverse operation, merging the ongoing work of two or more independent transactions as though they had always been a single transaction. The main advantages of dynamic restructuring are adaptive recovery and reduced isolation.

Workflow Models

It has been argued that the above models are still not powerful enough to model some business activities. More complex models have been proposed that are combinations of open and nested transactions; however, as they hardly conform to any of the ACID properties, the term workflow model is used instead. A workflow is an activity involving the coordinated execution of multiple tasks performed by different processing entities (people or software systems). Two general problems are involved in workflow systems: specification of the workflow and execution of the workflow. Both problems are complicated by the fact that many organizations use multiple, independently managed systems to automate different parts of the process.

Chapter 19: Security (transparencies © Pearson Education Limited 1995, 2005)

Objectives
- The scope of database security.
- Why database security is a serious concern for an organization.
- The types of threats that can affect a database system.
- How to protect a computer system using computer-based controls.
- The security measures provided by Microsoft Office Access and Oracle DBMSs.
- Approaches for securing a DBMS on the Web.

Database Security

Data is a valuable resource that must be strictly controlled and managed, as with any corporate resource. Part or all of the corporate data may have strategic importance and therefore needs to be kept secure and confidential. Database security refers to the mechanisms that protect the database against intentional or accidental threats. Security considerations do not apply only to the data held in a database: breaches of security may affect other parts of the system, which may in turn affect the database. Database security involves measures to avoid:
- Theft and fraud
- Loss of confidentiality (secrecy)
- Loss of privacy
- Loss of integrity
- Loss of availability

A threat is any situation or event, whether intentional or unintentional, that will adversely affect a system and consequently an organization.
(Figures: Summary of Threats to Computer Systems; Typical Multi-user Computer Environment)

Countermeasures - Computer-Based Controls

These range from physical controls to administrative procedures and include: authorization, access controls, views, backup and recovery, integrity, encryption, and RAID technology.

Authorization is the granting of a right or privilege which enables a subject to legitimately have access to a system or a system's object. Authentication is a mechanism that determines whether a user is who he or she claims to be.

Access control is based on the granting and revoking of privileges. A privilege allows a user to create or access (that is, read, write, or modify) some database object (such as a relation, view, or index) or to run certain DBMS utilities. Privileges are granted to users to accomplish the tasks required for their jobs.

Most DBMSs provide an approach called Discretionary Access Control (DAC). The SQL standard supports DAC through the GRANT and REVOKE commands: GRANT gives privileges to users, and REVOKE takes privileges away.

SQL Security Model

The SQL security model implements DAC based on:
- users: users of the database, whose identity is checked during the login process;
- actions: including SELECT, UPDATE, DELETE, and INSERT;
- objects: tables (base relations), views, and columns (attributes) of tables and views.

Users can protect the objects they own: when an object is created, a user is designated as its 'owner'; the owner may grant access to others, and users other than the owner have to be granted privileges to access the object.

The components of a privilege are grantor, grantee, object, action, and grantable. Privileges are managed using the GRANT and REVOKE operations, and the right to grant privileges can itself be granted. Issues with privilege management: each grant of privileges is to an individual or to "Public", which makes security administration in large organizations difficult, and an individual with multiple roles may have too many privileges for one of the roles. SQL3 is moving more toward role-based privileges.

Authentication and identification mechanisms: CONNECT USING; the DBMS may choose OS authentication or its own authentication mechanism (for example, Kerberos or PAM).

Access control through views: many security policies are better expressed by granting privileges on views derived from base relations. For example:
CREATE VIEW AVSAL(DEPT, AVG) AS SELECT DEPT, AVG(SALARY) FROM EMP GROUP BY DEPT
(access to this view can be granted to every department manager), or
CREATE VIEW MYACCOUNT AS SELECT * FROM Account WHERE Customer = current_user()
(a view containing the account information for the current user).

Advantages of views: views are flexible and allow access control to be defined at a description level appropriate to the application; views can enforce context-dependent and data-dependent policies; data can easily be reclassified.

Disadvantages of views:
- access checking may become complex;
- views need to be checked for correctness (do they properly capture the policy?);
- completeness and consistency are not achieved automatically: views may overlap or miss parts of the database;
- the security-relevant part of the DBMS may become very large.

An inherent weakness of DAC is that information read by a subject can be written to any other object writable by that subject, so a Trojan horse can copy information from one object to another.

Countermeasures - Mandatory Access Control

DAC, while effective, has certain weaknesses: in particular, an unauthorized user can trick an authorized user into disclosing sensitive data. An additional approach is required, called Mandatory Access Control (MAC). MAC is based on system-wide policies that cannot be changed by individual users. Each database object is assigned a security class, each user is assigned a clearance for a security class, and rules are imposed on the reading and writing of database objects by users. Typical classes are top secret (TS), secret (S), confidential (C), and unclassified (U), with TS > S > C > U. MAC determines whether a user can read or write an object based on rules that involve the security level of the object and the clearance of the user. These rules ensure that sensitive data can never be 'passed on' to another user without the necessary clearance. The SQL standard does not include support for MAC.

A popular model for MAC is Bell-LaPadula, which places two restrictions on all reads and writes of database objects:
- Simple Security Property: subject S is allowed to read object O only if class(S) >= class(O). A user with TS clearance can read a relation with C classification, but a user with C clearance cannot read a relation with TS classification.
- *-Property: subject S is allowed to write object O only if class(S) <= class(O), which prevents information from flowing down to objects with a lower classification.
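As a compact illustration of these two Bell-LaPadula rules, here is a small, self-contained Python sketch (the class ordering follows TS > S > C > U as above; the example subjects and objects are invented, and this ignores real systems' additional discretionary checks).

```python
# Bell-LaPadula read/write checks (illustrative sketch).
LEVELS = {"U": 0, "C": 1, "S": 2, "TS": 3}   # TS > S > C > U

def can_read(subject_class: str, object_class: str) -> bool:
    """Simple Security Property: no read up."""
    return LEVELS[subject_class] >= LEVELS[object_class]

def can_write(subject_class: str, object_class: str) -> bool:
    """*-Property: no write down."""
    return LEVELS[subject_class] <= LEVELS[object_class]

print(can_read("TS", "C"))   # True: TS clearance may read a C-classified relation
print(can_read("C", "TS"))   # False: no read up
print(can_write("TS", "C"))  # False: no write down (would leak TS data)
print(can_write("C", "TS"))  # True: writing upward is permitted
```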
