CoE 167 Computing Systems LE 1 - Reviewer PDF

Summary

This document is a reviewer for CoE 167 (Computing Systems) Long Exam 1. It introduces distributed systems and their overall design goals: resource sharing, distribution transparency, openness, dependability, security, and scalability. It ends with a simple classification of distributed systems, focusing on high-performance distributed computing.

Full Transcript


lec01
CHAPTER 01: INTRODUCTION

Views on realizing a distributed system:
○ Integrative view - consolidate existing computing resources / connect existing networked computer systems into a larger system
○ Expansive view - scale up operations / extend an existing system with additional computers

Distributed system
○ Networked computer system (NCS) in which processes and resources are sufficiently spread across multiple computers

Decentralized system
○ NCS in which processes and resources are necessarily spread across multiple computers

Common misconceptions
○ "Centralized solutions do not scale"
○ "Centralized solutions have a single point of failure"
  - A single point of failure is often easier to manage and to make more robust

Perspectives on distributed systems
○ Architecture - common organizations
○ Process - what kinds of processes there are and their relationships
○ Communication - facilities for exchanging data
○ Coordination - application-independent algorithms (the content of the packets/communications)
○ Naming - how to identify resources (e.g., DNS)
○ Consistency and replication - performance requires replication of data (the most complex perspective)
○ Fault tolerance - keep running in the presence of partial failures; the aim is not a failure-free system but one that tolerates faults, and a design with no fault handling is a failure already
○ Security - authorized access to resources; supposedly an add-on, but it should now be considered in every part of the design

Overall design goals

○ Resource sharing
  Examples: cloud-based shared storage and files, P2P-assisted multimedia streaming, shared mail service, shared web hosting
  OBSERVATION: the network == the computer

○ Distribution transparency
  Transparency - hide the fact that processes and resources are physically distributed across multiple computers (possibly separated by large distances)
  Middleware layer - application-level software that provides a sufficiently common operating system across machines
  Types of transparency (what does it hide?):
  - Access - differences in data representation and in how a resource is accessed
  - Location - where a resource is located
  - Relocation - that a resource may be moved to another location
  - Migration - how a resource moved
  - Replication - redundancy; that a resource is replicated
  - Concurrency - that a resource may be shared by several independent users
  - Failure - the failure and recovery of a resource
  Degree of transparency:
  (1) Full distribution transparency may be too much
      - Communication latencies cannot be hidden
      - Completely hiding failures is impossible
      - Full transparency will cost performance
        ○ Keeping replicas exactly up-to-date
        ○ Immediately flushing write operations
  (2) Exposing distribution may be good
      - Location-based services
      - Different time zones
  OBSERVATION: the different transparency techniques live in the layer between applications and the OS: the middleware layer

○ Openness
  Open distributed system - a system that offers components that can easily be used by, or integrated into, other systems (and often itself consists of components from elsewhere)
  Open systems:
  - Have well-defined interfaces (define what is expected)
  - Easily interoperate
  - Support portability of applications
  - Are easily extensible
  Policies vs. mechanisms
  - Policies (examples): what level of consistency for client-cached data? which operations to allow on downloaded code? which QoS requirements? what level of secrecy?
  - Mechanisms (examples): dynamic setting of caching policies, different levels of trust for mobile code, adjustable QoS parameters, different encryption algorithms
  OBSERVATION: a stricter separation between policy and mechanism gives a more proper mechanism, but also many configuration parameters and complex management; hard-coding policies simplifies management and reduces complexity, at the cost of flexibility
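As a sketch of what separating policy from mechanism can look like in code (a hypothetical example, not from the lecture): the cache below is the mechanism, and the freshness rule that decides the level of consistency for client-cached data is a policy passed in as a parameter.

```python
import time
from typing import Any, Callable, Dict, Tuple

# Mechanism: a generic client-side cache. It only knows how to store entries
# and to ask a policy whether a cached entry may still be used.
class ClientCache:
    def __init__(self, is_fresh: Callable[[float], bool]):
        self.is_fresh = is_fresh                        # policy, injected by the user
        self._store: Dict[str, Tuple[float, Any]] = {}

    def get(self, key: str, fetch: Callable[[], Any]) -> Any:
        entry = self._store.get(key)
        if entry is not None and self.is_fresh(time.time() - entry[0]):
            return entry[1]                             # policy says the copy is usable
        value = fetch()                                 # otherwise contact the server
        self._store[key] = (time.time(), value)
        return value

# Policies: choose the level of consistency for client-cached data
# without touching the caching mechanism itself.
strict_policy = lambda age: False                       # always revalidate with the server
relaxed_policy = lambda age: age < 30.0                 # tolerate copies up to 30 s old

if __name__ == "__main__":
    cache = ClientCache(relaxed_policy)                 # swap in strict_policy to change behaviour
    print(cache.get("profile:42", lambda: "fetched from server"))
    print(cache.get("profile:42", lambda: "fetched from server (again)"))
```

Swapping relaxed_policy for strict_policy changes the consistency level without modifying ClientCache, which is exactly the flexibility (and the extra configuration burden) described in the observation above.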
○ Dependability
  A component C (a process or a channel) may depend on another component to provide services to its clients
  Component C depends on C* if the correctness of C's behavior depends on the correctness of C*'s behavior
  Requirements related to dependability:
  - Availability - readiness for usage
  - Reliability - continuity of service delivery
  - Maintainability - how easily a failed system can be repaired
  - Safety - very low probability of catastrophes
  Reliability R(t) vs. availability of a component C; traditional metrics:
  - Mean Time To Failure (MTTF) - average time until C fails
  - Mean Time To Repair (MTTR) - average time needed to repair C
  - Mean Time Between Failures (MTBF) = MTTF + MTTR
  Terminology: failure, error, fault
  - Failure - C not living up to its specifications (example: a crashed program/process)
  - Error - the part of C that can lead to a failure (example: a programming bug)
  - Fault - the cause of an error (example: a sloppy programmer)
  Handling faults:
  - Fault prevention - prevent the occurrence of a fault (example: don't hire sloppy programmers)
  - Fault tolerance - build C so that it masks the occurrence of a fault (example: have each C built by two independent programmers)
  - Fault removal - reduce the presence, number, or seriousness of a fault (example: get rid of sloppy programmers)
  - Fault forecasting - estimate the current presence, future incidence, and consequences of faults (example: estimate how well a recruiter is doing when it comes to hiring sloppy programmers)

○ Security
  OBSERVATION: a distributed system that is not secure is not dependable
  What we need:
  - Confidentiality - information is disclosed only to authorized parties
  - Integrity - alterations to a system's assets can be made only in an authorized way
  - Authentication - verifying the correctness of a claimed identity
  - Authorization - determining whether an identified entity has the proper access rights
  - Trust - one entity is assured that another will perform particular actions according to a specific expectation
  Security mechanisms
  - Keeping it simple: encrypting and decrypting data using security keys
  - Notation: K(data) means using key K to encrypt/decrypt data
  - Symmetric cryptosystem: data = D_K(E_K(data)), with D_K = E_K (the same key is used to encrypt and decrypt)
  - Asymmetric cryptosystem: a public key, PK(data), and a private (secret) key, SK(data); what one key encrypts, only the other can decrypt
  - Secure hashing: secure hash functions H(data), used for practical digital signatures (the hash is typically what gets signed rather than the whole message)
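As a concrete illustration of the K(data)/H(data) notation, here is a minimal Python sketch (not from the lecture). The XOR "cipher" is only a stand-in that makes the symmetric property data = D_K(E_K(data)) with D_K = E_K visible; real symmetric cryptosystems use algorithms such as AES, and hashlib.sha256 stands in for a secure hash function H.

```python
import hashlib

# Toy symmetric "cipher" for illustrating the notation only (NOT secure):
# encryption and decryption are the same operation, i.e. D_K = E_K,
# so data == D_K(E_K(data)).
def xor_cipher(key: bytes, data: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Secure hash function H(data): any change to the data yields a completely
# different fixed-length digest.
def H(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

if __name__ == "__main__":
    key = b"shared-secret-key"
    data = b"LE1 reviewer notes"

    ciphertext = xor_cipher(key, data)        # E_K(data)
    recovered = xor_cipher(key, ciphertext)   # D_K(E_K(data))
    assert recovered == data                  # data = D_K(E_K(data))

    print(H(data))                            # digest of the original message
    print(H(data + b"!"))                     # a one-byte change gives a different digest
```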
○ Scalability
  OBSERVATION: many systems are described as "scalable", but why do they actually scale?
  At least three components of scalability:
  - Size scalability
    Definition: number of users or processes
    Problems: computational capacity, limited by the CPUs; storage capacity, including the transfer rate between CPUs and disks; the network between the user and the centralized service
    Note: size scalability can also be studied through formal analysis
  - Geographical scalability
    Definition: maximum distance between nodes
    Problems: you cannot simply go from LAN to WAN, because many distributed systems assume synchronous client-server interactions and latency becomes prohibitive; WAN links are unreliable; multipoint communication is lacking (a partial solution is separate naming and directory services)
  - Administrative scalability
    Definition: number of administrative domains
    Problems: conflicting policies on usage, payment, management, and security
    Examples: computational grids (expensive resources shared across different domains), shared equipment
    Exceptions: file-sharing systems, peer-to-peer telephony, and peer-assisted audio streaming, which rely more on collaboration between end users
  Techniques for scaling:
  (1) Hide communication latencies (a sketch follows this list)
      (a) Asynchronous communication
      (b) A separate handler for the incoming response
      (c) Not every application fits this model
  (2) Facilitate the solution by moving computations to the client
  (3) Partition data and computations across multiple machines
      (a) Move computation to clients
      (b) Decentralized naming services
      (c) Decentralized information systems
  (4) Replication and caching: make copies of data at different machines
      (a) Replicated file servers and databases
      (b) Mirrored websites
      (c) Web caches
      (d) File caching
      Cons: inconsistencies; keeping copies consistent requires global synchronization
      OBSERVATION: tolerating inconsistencies reduces the need for global synchronization, but whether that is acceptable is application dependent
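Technique (1), hiding communication latency with asynchronous communication and a separate response handler, can be sketched with Python's asyncio. This is a hypothetical illustration: remote_lookup and its 0.5-second delay merely simulate a slow request to a remote server.

```python
import asyncio

# Simulated remote call: the "network latency" is just a sleep.
async def remote_lookup(name: str) -> str:
    await asyncio.sleep(0.5)                 # pretend this is a slow WAN round trip
    return f"address-of-{name}"

# Separate handler for the incoming response (technique 1b).
def handle_response(task: asyncio.Task) -> None:
    print("response arrived:", task.result())

async def main() -> None:
    # Fire the request asynchronously instead of blocking on it (technique 1a).
    task = asyncio.create_task(remote_lookup("fileserver"))
    task.add_done_callback(handle_response)

    # The client keeps doing useful local work while the request is in flight.
    for step in range(3):
        print("doing local work, step", step)
        await asyncio.sleep(0.2)

    await task                               # make sure the response was processed

asyncio.run(main())
```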
lec02
SIMPLE CLASSIFICATION OF DISTRIBUTED SYSTEMS
(1) High-performance distributed computing
(2) Distributed information systems
(3) Pervasive systems

High-performance distributed computing
  OBSERVATION: HPDC started with parallel computing
  - Multiprocessor and multicore vs. multicomputer
  Distributed shared memory systems
  - Multiprocessors are much easier to program than multicomputers, yet run into problems when the number of processors increases → offer a shared-memory model on top of a multicomputer
    ○ Map all main-memory pages into one single virtual address space
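The claim that a shared-memory model is easier to program than explicit message passing can be illustrated on a single machine with Python's multiprocessing module (a hypothetical sketch, not from the lecture). Distributed shared memory aims to provide the analogous illusion across machines by mapping main-memory pages into one virtual address space.

```python
from multiprocessing import Process, Value

# With a shared-memory model, cooperating processes communicate by reading
# and writing the same variable instead of exchanging explicit messages.
def worker(counter) -> None:
    with counter.get_lock():          # synchronisation is still the programmer's job
        counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)           # an int living in memory shared by all processes
    procs = [Process(target=worker, args=(counter,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("counter =", counter.value)  # prints 4: every process saw the same memory
```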