Summary

This document provides an overview of cloud and edge computing concepts, including details on different types of clouds, data center networks, and virtualization. The document discusses a range of key topics related to these technologies, from the background and driving forces behind their development to their practical implementation and operational aspects.

Full Transcript


Topic: Cloud Computing and Edge Computing
BOOKS: [1] Nader F. Mir, Computer and Communication Networks, Chapter 16. [2] Edge Cloud Operations: A Systems Approach, Sections 1.3 and 2.1.

Topic Outline
❑ Background.
❑ Cloud computing and data centers.
❑ Data center networks (DCNs).
❑ Network virtualization.
❑ Example Cloud and Edge Computing implementation.

Background
CHAPTER 16: CLOUD COMPUTING AND NETWORK VIRTUALIZATION

Cloud Computing and Data Centers
SECTION 16.1

Cloud Computing and Data Centers
❑ A cloud computing or cloud-based system is a network-based computing system in which clients use a shared pool of configurable computing resources.
❑ Cloud-based systems provide services from large data centers (DCs). Such services have radically changed the way in which computer services are built, managed, and delivered.
❑ The term cloud computing implies a variety of computing concepts that involve a large number of servers connected to a network, typically the Internet.

Driving Forces Behind the Birth of Cloud Computing
❑ The driving forces behind the birth of cloud computing were:
- The possibility of creating overcapacity at large corporate data centers.
- The falling cost of storage.
- The ubiquity of broadband and wireless networking.
- Significant improvements in networking technologies.
❑ In essence, cloud computing is "distributed computing over a network" that runs programs or applications on numerous connected host servers at the same time. A host server in a data center is typically called a blade.

Cloud Computing and Data Centers
❑ A blade includes a CPU, memory, and disk storage. Blades are normally stacked in racks, with each rack holding an average of 30 blades. These racks of host blades are also known as server racks.
❑ A key factor in the success of cloud computing is the inclusion of virtualization in its structure.
❑ Cloud computing is a method of providing computing services from a large, highly "virtualized" data center to many independent users utilizing shared applications.
❑ In data centers, the services are provided by "physical" servers and "virtual" servers.

Cloud Computing and Data Centers
❑ A virtual server is a machine that emulates a server through software running on one or more real servers. Such virtual servers do not physically exist and can therefore be moved around and scaled up or down without affecting the end user.
❑ From the application point of view, cloud computing is an on-demand, self-service, and metered service offered at various quality levels.
❑ Cloud data centers provide a variety of computing resources, attributes, and services such as database storage, computing, e-mail, voice, multimedia, and enterprise applications.

Benefits of Cloud Computing from the Application Standpoint
❑ From the application standpoint, cloud computing provides numerous benefits, including the following:
- Infrastructure-less operation (for organizations).
- Flexibility.
- Availability.
- Use of platforms (for client users).
- Commodified and on-demand service.
- Availability of an application programming interface (API).

Types of Cloud
❑ There are two main types of clouds:
1. Public cloud, in which resources are dynamically available for on-demand and self-service use by the public through Web applications and open APIs.
2. Private cloud, which is simply the on-site cloud of a corporation.
❑ A cloud can also be a hybrid cloud, consisting of some computing resources located on-site (on the premises) and some located off-site (in the public cloud).
❑ We can also have a community cloud, formed when several organizations with similar requirements share a common infrastructure.

An Overview of a Cloud Computing System
❑ A data center consists of numerous server racks collectively called a "server farm".
❑ Depending on the application, a data center can provide a variety of different computing, storage, or other services.
❑ An important element of a data center is the cloud storage system. The server racks can also include a storage system to store public or private data.

An Overview of a Cloud Computing System
❑ Any data center has its own DCN that interconnects its host machines (servers) with each other and connects the data center itself to the Internet.
❑ A load balancer, which is essentially a server, monitors the traffic coming into the data center and distributes the incoming traffic load evenly over the servers and storage systems.

Cloud Computing Service Models
❑ The cloud computing environment provides services that customers can use remotely and online.
❑ The advantage of such a service model is that any software or hardware service can be rented by a customer remotely while the cloud management upgrades it frequently.
❑ The most common models of services and service development in cloud computing are:
- Software as a service (SaaS): a software delivery model in which software and associated data are centrally hosted in the cloud (on-demand software).
- Infrastructure as a service (IaaS): outsources the use of any equipment needed to support operations, such as storage, servers, and networking components.
- Platform as a service (PaaS): provides a computing platform and a solution stack as a service.

Data Centers - Design Principles
❑ Reliability: Individuals, enterprises, service providers, and content providers rely heavily on data in their selected data centers to run their businesses.
❑ Load balancing: A load balancer in a data center distributes the traffic among the links inside the data center as evenly as possible.
❑ Any data center runs a huge variety of applications and services, and thus an effective routing protocol should guarantee the performance of each application. This can be done by utilizing the link capacity appropriately.
❑ Energy consumption control is an important design factor. It is now well known that data centers consume substantial energy.

Virtualization in Data Centers
❑ Virtualization in data centers, and generally in cloud computing, is the act of creating a virtual, rather than an actual, version of a computing component, such as an operating system, a server, or a storage device.
❑ One can see that the virtualization of a system is the act of decoupling an infrastructure service from the system's physical components.
❑ Virtualization, in a more general interpretation, has impacted most aspects of our lives. Our methods of online shopping, education, or games and entertainment are all partially or fully virtual.

Need for Virtualization of Resources
1. Sharing of resources:
- When a resource, such as a server, is too large for a user or a network entity, the resource can be partitioned into multiple virtual pieces, each having similar properties as the original resource.
- This way a large-scale resource can be shared virtually among multiple users or entities.
- For example, a host can be virtually partitioned into multiple virtual machines for multiple users.

Need for Virtualization of Resources
2. Aggregation of resources:
- There are other circumstances under which a resource might be too small for a user or a network entity.
- In such cases, a larger resource can be virtually engineered to fill the need.
- For example, numerous inexpensive disks can be aggregated to make up a large, reliable storage system.

Need for Virtualization of Resources
3. Resource management:
- The implementation of virtualization on resources or devices makes the management of devices and their networking easier, because they can be managed using software.
There are other reasons as well. For example, sharing a resource among users requires that they be isolated from each other. The provision of several attributes and services in a large enterprise cloud service system requires a significant number of servers, networking devices, and storage resources. This is only achievable through extensive use of resource virtualization.

Virtualization in Data Centers
▪ Virtual Machines (VMs)
▪ A virtual machine (VM) is a simulation of a computing machine, such as a desktop, laptop, or server, decoupled from hardware and providing an execution environment to clients by using well-defined protocols.
▪ Multiple VMs can run on a single physical machine. Conversely, multiple physical machines can form a single VM.

Benefits of Machine Virtualization
The following are the benefits of machine virtualization:
❑ Effective use of a machine: Multiple independent users share computing and infrastructure facilities.
❑ Lower system maintenance cost: The maintenance cost of servers can be reduced, as each machine can be utilized close to its optimal capacity, avoiding unnecessary unused active machines.
❑ Rapid and dynamic provisioning of new features: Clearly, the agility of creating multiple VMs on a physical machine allows new features to be added to a system rapidly and more effectively.
❑ Load balancing at and beyond data centers: The computing load can be distributed evenly in or beyond a data center.

Challenges of Machine Virtualization
❑ Virtualization of machines generates several other networking challenges at both layer 2 (the data link layer) and layer 3 (the network layer).
❑ One challenge attracting researchers' attention is the impact of virtualization on transport layer protocols.
❑ Virtualization can dramatically deteriorate the performance of transport layer protocols, to the point that the throughput of both TCP and UDP becomes unstable.
❑ The end-to-end delay of packets is large even if the network load is light.
❑ The basic reason for the unstable throughput and large delay is simply the larger-than-normal latency of scheduling hypervisors.

Virtualization in Data Centers
❑ Hypervisor
❑ A hypervisor is a monitoring and managing mechanism for virtual machines.
❑ It consists of software, hardware, or firmware (software/hardware) that creates and runs virtual machines.
❑ A real computer (machine) on which a hypervisor is running one or more virtual machines is defined as the host machine, while each virtual machine is called a guest machine.
❑ The hypervisor provides the guest operating systems with a virtual operating platform and administers their execution. Multiple instances of operating systems on the real machine may share the virtualized hardware resources through the hypervisor.
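To make the host/guest relationship concrete, the following minimal sketch queries a hypervisor for its guest machines. It assumes the libvirt Python bindings and a local QEMU/KVM hypervisor, neither of which is named in the source material; it is an illustrative sketch, not the chapter's method.

```python
# Minimal sketch: querying a hypervisor for its guest machines.
# Assumes the libvirt Python bindings (pip install libvirt-python)
# and a local QEMU/KVM hypervisor; neither is specified by the slides.
import libvirt

# Open a read-only connection to the hypervisor on this host machine.
conn = libvirt.openReadOnly("qemu:///system")

# Each libvirt "domain" corresponds to one guest machine (a VM).
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    # dom.info() returns [state, maxMemKiB, memKiB, nrVirtCpu, cpuTime].
    info = dom.info()
    print(f"guest={dom.name()} state={state} vcpus={info[3]} maxMemKiB={info[1]}")

conn.close()
```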
Data Center Networks (DCNs)
SECTION 16.2

Data Center Networks (DCNs)
❑ Typical DCNs are currently based on routers and layer 2 and 3 switches connected in a certain topology.
❑ The servers of each rack are directly interconnected using one tier of top-of-rack (ToR) or edge switches, which are layer 2 switches.
❑ Each host (server) in a server rack has a network interface card (NIC) that connects it to its ToR switch, while each host is also assigned its own data-center-internal IP address.
❑ Edge switches are in turn connected through 0 to k tiers of layer 2 or 3 aggregate switches.
❑ Finally, aggregate switches are interconnected to layer 3 core switches.

Data Center Networks (DCNs)
❑ With such a hierarchical architecture in a data center network, scaling up a data center to a substantially larger number of hosts is made possible.
❑ Two types of traffic flows are supported by a DCN:
1) Traffic flow between clients residing outside of the data center and the data center hosts (servers).
2) Traffic flow among data center hosts.
The border routers sitting on top of the core switches handle connections between the data center servers and the clients residing in the rest of the Internet.

[Figure: An overview of a data center network as the communication component of data servers.]

Load Balancer
❑ Load balancers (LBs) are adopted in data centers to balance traffic flow among servers.
❑ A load balancer distributes requests from clients over the hosts of a data center as evenly as possible. In other words, a load balancer decides which server must carry out the task of an individual request.
❑ A large data center offers many types of real-time applications concurrently to numerous clients.
❑ Imagine that numerous clients from outside a data center request a variety of different applications. This requires the data center to find the right resource within its facility quickly and communicate back to the source swiftly and appropriately.

Load Balancer
❑ Larger data centers must have several load balancers, each one dedicated to specific application(s).
❑ Note that load balancing can, on the other hand, lead to congestion when large flows are present and a limited number of load-balancing machines are operable.
❑ There are several methods by which load balancing can be implemented in data center networks.

Load Balancer - Round Robin Algorithm
❑ In the round robin algorithm, a load balancer does not collect and save the present state of the servers.
❑ Once a request for a certain type of application is received by the load balancer, it cycles through the servers in the data center network in a fixed order.
❑ The algorithm offers the request to servers one by one, in round robin fashion, until a free eligible server for the requested application is found.

Load Balancer - Least-Connect Algorithm
❑ The least-connect algorithm presents an alternative approach to load balancing in data centers.
❑ In this method, the load balancer performs the traffic load distribution based on its knowledge of the number of active connections to each server in the data center.
❑ The load balancer must monitor the number of open connections to its servers, and then choose the servers with the least number of active connections as the eligible ones.
❑ The least-connect algorithm is more complicated than the round robin algorithm, but it is more efficient in large-scale data centers.
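To contrast the two policies, here is a minimal sketch of both selection algorithms. The server names, the connection-count bookkeeping, and the helper functions are illustrative assumptions, not part of the book's text; a production balancer would also track server health and per-application eligibility.

```python
import itertools

# Minimal sketch of the two load-balancing policies described above.
# Server names and helper functions are hypothetical illustrations.
SERVERS = ["server-a", "server-b", "server-c", "server-d"]

# --- Round robin: keeps no per-server load state; the balancer simply
# cycles through the servers, offering each new request to the next
# server in a fixed rotation.
_rotation = itertools.cycle(SERVERS)

def pick_round_robin() -> str:
    return next(_rotation)

# --- Least-connect: the balancer tracks the number of active (open)
# connections per server and picks the least-loaded one.
active_connections = {s: 0 for s in SERVERS}

def pick_least_connect() -> str:
    return min(active_connections, key=active_connections.get)

# Usage: assign five incoming requests under each policy.
for _ in range(5):
    chosen = pick_least_connect()
    active_connections[chosen] += 1   # a connection opens on that server
    print("least-connect ->", chosen)

print("round robin ->", [pick_round_robin() for _ in range(5)])
```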
DCN Architectures
❑ The largest portion of the cost incurred to set up a networking infrastructure for a data center is attributed to switches and routers, load balancers, and data center links.
❑ Among the items in this list, switches and routers are the most expensive ones. Smart design of the networking portion of cloud computing therefore plays an important role in reducing network cost.
❑ Three networking structures used in DCNs are:
1. Server-routed networking scheme.
2. Switch-routed networking scheme.
3. Fully connected optical networking scheme.

[Figure: A data center network using the server-routed networking scheme with two layers of switching: core layer 3 switches S10-S13, a load balancer, ToR (edge) layer 2 switches S00-S03, and server racks SR00-SR03, each holding servers SE00-SE03.]

Server-Routed Networking Scheme
❑ In the server-routed networking scheme, servers or server racks in data centers act as end hosts and also as relay nodes.
❑ In this scheme, servers are configured with multiple ports, and switches connect an identical number of server racks. In the server-routed networking scheme, there is no aggregate switch tier in the DCN structure.
❑ The network routing configuration in the server-routed networking scheme has a recursively defined structure.
❑ In this algorithm, we assume m server racks, each having k servers, and each server rack being connected to a ToR switch; as a result, there are m k-port ToR layer 2 switches.
❑ Two servers are neighbors if they connect to the same ToR switch.

Server-Routed Networking Scheme - Example
Q) Show the structure of a server-routed networking scheme of a cloud computing center where k = 4, and then indicate the path of routing from server SE00 in server rack SR00 to server SE01 in server rack SR02.
Solution: The figure above shows the structure of the network when k = 4, resulting in ToR switches with 4 ports each. There are several paths connecting the two servers. Two such paths are:
SE00 (in SR00) - S10 - SE00 (in SR02) - S02 - SE01; or
SE00 (in SR00) - S00 - SE01 (in SR00) - S11 - SE01 (in SR02).

Switch-Routed Networking Scheme
❑ In switch-routed networking, the interconnection topology among the switches is regular.
❑ Edge switches directly connect to end servers at server racks, and core switches directly connect to the Internet. The aggregate switches form the middle layer.

Switch-Routed Networking Scheme
❑ Each server has its actual MAC address and a physical identification number (PIN).
❑ The information of a server is encoded into its PIN in the form of a 6-field MAC-style address L6-L5-L4-L3-L2-L1 as follows:
▪ L1 (16 bits): represents the ID of the virtual machine (VM) on a physical machine.
▪ L2 (8 bits): represents the edge switch port number the host is connected to.
▪ L3 (8 bits): represents the column position of the edge switch.
▪ L4 (16 bits): reflects the ID number of the aggregate switch block. An aggregate switch block refers to a block of aggregate switches (such as S10 and S11) that are connected to a server rack such as SR00. In a sense, L4 represents the server rack number.
▪ L5 and L6 are reserved for network expansion.
For example, a PIN of 00:00:01:00:01:198 means the hundred-and-ninety-ninth (L1=198) virtual machine in a server host connected to the second (L2=01) port of the edge switch, where the edge switch is located in the first (L3=00) switching column in the second (L4=01) aggregate switch block, which is the second server rack (SR01).
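The PIN layout above lends itself to a small worked example in code. The sketch below models the fields, parses the slide's example PIN, and prints its interpretation; the `PIN` class and the colon-separated decimal rendering of the fields are assumptions based on the slide's notation, not an API from the book.

```python
from dataclasses import dataclass

# Minimal sketch of the 6-field PIN layout described above
# (L6-L5-L4-L3-L2-L1). Class name and string format are illustrative.
@dataclass
class PIN:
    l6: int = 0   # reserved for network expansion
    l5: int = 0   # reserved for network expansion
    l4: int = 0   # aggregate switch block ID (16 bits) ~ server rack number
    l3: int = 0   # edge switch column position (8 bits)
    l2: int = 0   # edge switch port number (8 bits)
    l1: int = 0   # virtual machine ID on the physical host (16 bits)

    @classmethod
    def parse(cls, text: str) -> "PIN":
        # Slide notation: decimal fields separated by colons, L6 first.
        return cls(*(int(field) for field in text.split(":")))

    def describe(self) -> str:
        return (f"VM #{self.l1 + 1} (L1={self.l1}) on the host at "
                f"port {self.l2} of the edge switch in column {self.l3} "
                f"of aggregate switch block {self.l4} (rack SR{self.l4:02d})")

# The example PIN from the slide:
pin = PIN.parse("00:00:01:00:01:198")
print(pin.describe())
```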
Switch-Routed Networking Scheme
❑ Routing from outside of the DCN into the data center using the switch-routed networking topology starts with the core switches forwarding a packet based on its destination PIN address: a core switch inspects the L4 bits of a newly arriving packet to make a decision on an output port.
❑ When the packet reaches an aggregate switch, the switch checks the packet's PIN to find the L2 bits and thereby extract the edge switch port number.
❑ The PIN address is also used for communications among the data center hosts, from a host to a core switch and from the core switch to another host in the center.

Fully Connected Optical Networking Scheme
❑ A fully connected optical networking scheme deploys all-optical switches in the core switching tier of DCNs.
❑ In high-speed communication applications, nonoptical switching schemes suffer from deficiencies such as lower throughput, high latency, and high power consumption.
❑ The inclusion of an all-optical switching scheme as the core switching engine of a DCN, directly attached to the ToR switching tier, is an alternative solution to the deficiencies of general DCNs.

[Figure: A data center network using the fully connected optical networking scheme.]

NETWORK VIRTUALIZATION
SECTION 16.3

Network Virtualization
❑ Network virtualization is the act of decoupling networking services from network infrastructures.
❑ Network virtualization offers a powerful approach to the paradigm of shared network infrastructure.
❑ The important role that networking plays in cloud computing calls for improved, combined control and optimization of networking and computing resources in a cloud environment.

Network Virtualization
❑ Two other issues that advanced converged networking and cloud computing must overcome are:
- The general delay in computer networks.
- The lack of seamless connections often experienced in wireless networks.
❑ Network virtualization enables network operators to compose and operate multiple independent and application-specific virtual networks that share a common physical infrastructure.

Network Virtualization
❑ To achieve this capability, the virtualization mechanism must guarantee isolation between coexisting virtual networks.
❑ In a virtual network, a service is described in a data structure and exists entirely in a software abstraction layer; the service can be reproduced on any physical resource running the virtualization software.
❑ The configuration attributes of the service are exposed in software through API interfaces, thereby unlocking the full potential of networking devices.

Network Virtualization Components
❑ Once a virtual network is decoupled from a physical network to its ultimate potential, the physical network configuration is simplified to providing packet forwarding service from one hypervisor to the next.
❑ The implementation details of physical packet forwarding are separated from the virtual network, so both the virtual and physical networks can evolve independently.
❑ In a virtual network, the network virtualization software produces:
▪ virtual links,
▪ virtual switches (vSwitches),
▪ virtual NICs (VNICs),
▪ virtual routers (vRouters),
▪ virtual load balancers (VLBs),
▪ logical firewalls.
Virtual Links
❑ Consider multiple hosts connected by a shared physical link.
❑ The link can then be viewed as a multichannel medium in which each host transmits its data through a channel.
❑ From the connectivity viewpoint, this channelization mechanism arranged over a physical link is commonly referred to as a "virtual link" over the physical link.

Virtual NICs
❑ A network interface card (NIC) is attached to any host or networking device. A NIC processes link layer (layer 2) protocol functions such as those for the Ethernet and WiFi protocols.
❑ Each top-of-rack (ToR) layer 2 switch or aggregate switch in a data center uses a NIC for its connection.
❑ The need for a virtual NIC (VNIC) becomes apparent when a virtual machine in a data center moves from one subnet to another.

Virtual NICs
❑ In this case, the IP address of the virtual machine must change while its MAC address (IEEE 802 address) remains unchanged.
❑ Thus, when a virtual machine's network connection crosses multiple layer 2 networks, the virtual machine needs to be given a new IP address.
❑ Any host or networking device uses at least one NIC for communication. When more than one virtual machine is active on the system, each virtual machine needs its own virtual NIC.

[Figure: Network interface card (NIC) virtualization. VMs VM1-VM3 in a physical machine attach to VNIC 1-VNIC 3, which connect through a VEPA to the physical NIC and its physical port.]

Explanation of the Previous Figure
❑ The figure shows that each VM is associated with a VNIC housed in the physical machine.
❑ The switching and communication among VNICs are carried out by a virtual Ethernet port aggregator (VEPA), specified by the IEEE 802.1Qbg standard.
❑ As an alternative architecture, VNICs can also be housed in the physical NIC of the machine.
❑ This alternative approach would require significant software overhead, and the VNICs may not be easily manageable externally.

Virtual Switches (vSwitches)
❑ In networking scenarios with a large number of hosts requiring connections to an outside system, the number of hosts is normally larger than the number of ports in a layer 2 switch.
❑ A software standard such as IEEE 802.1BR, called the bridge port extension (BPE) standard, can be used in physical switches to extend the number of ports to multiple virtual ports, resulting in a vSwitch.

Virtual Switches (vSwitches)
❑ A virtual switch does more than just forward frames; it can intelligently direct communication on the network by inspecting frames before passing them on.
❑ Virtual switching functions can be embedded right into the virtualization software, but a virtual switch can also be included in a machine attached to the physical switch as part of its firmware.
❑ Virtual switches can also perform the task of moving VMs across physical hosts; with other intelligent software embedded in virtual switches, the integrity of a VM's profile, including its network and security settings, can potentially be ensured.

[Figure: An overview of virtual switches (vSwitches) connecting a virtual machine to a system outside its physical machine. Three physical machines, each with VMs VM1-VM3, VNICs, a VEPA, and a NIC, attach to a physical switch extended by a BPE-based virtual switch.]

Edge Cloud Operations: A Systems Approach
1.3 CLOUD TECHNOLOGY AND 2.1 EDGE CLOUD

Introduction
Clouds provide a set of tools for bringing up and operating scalable services, but how do you operationalize a cloud in the first place?
Introduction
Few organizations have reason to instantiate a hyperscale datacenter, but deploying private edge clouds in an enterprise, and optionally connecting that edge to a datacenter to form a hybrid cloud, is becoming increasingly common. The term "edge cloud" is used to distinguish the focus from the "core", which is the traditional domain of the hyperscale operators. The edge is more likely to be in an enterprise or an "Internet of Things" setting such as a factory. The edge is the place where cloud services connect to the real world, e.g., via sensors and actuators, and where latency-sensitive services are deployed to be close to the consumers of those services.

Introduction
The hyperscalers can manage (operationalize) the edge cloud as an extension of their core datacenters. There is significant activity to provide such products, with Google's Anthos, Microsoft's Azure Arc, and Amazon's ECS-Anywhere as prime examples. On the other hand, the barrier to operationalizing a cloud is not so high that only a hyperscaler has the wherewithal to do it. It is possible to build a cloud, and all the associated lifecycle management and runtime controls required to operate it, using readily available open-source software packages.

Introduction
This book describes what such a cloud management platform looks like. The book's goal is to focus on the fundamental problems that must be addressed:
- Design issues that are common to all clouds.
- Coupling this conceptual discussion with the specific engineering choices made while operationalizing a specific enterprise cloud.
Our example is Aether, an ONF project to support 5G-enabled edge clouds as a managed service.

Introduction
Aether has the following properties that make it an interesting use case to study:
- Aether starts with bare-metal hardware (servers and switches) deployed in edge sites (e.g., enterprises). This on-prem cloud can range in size from a partial rack to a multi-rack cluster, assembled according to the best practices used in datacenters.
- Aether supports both "edge services" running on these on-prem clusters and "centralized services" running in commodity cloud datacenters (hybrid cloud).

Introduction
- Aether augments this edge cloud with 5G-Connectivity-as-a-Service, giving us a service that must be operationalized (in addition to the underlying cloud). The end result is that Aether provides a managed Platform-as-a-Service (PaaS).
- Aether is built entirely from open-source components. The only thing it adds is the "glue code" and "specification directives" required to make it operational. This means the recipe is fully reproducible by anyone.

Cloud Technology - Example Implementation
❑ Being able to operationalize a cloud starts with the building blocks used to construct the cloud in the first place.
❑ This section summarizes the available technology, with the goal of identifying the baseline capabilities of the underlying system.

Cloud Technology - Hardware Platform
❑ Bare-metal servers and switches, built using merchant silicon chips (ARM or x86 processor chips and Tomahawk or Tofino switching chips).

Cloud Technology - Hardware Platform
❑ The bare-metal boxes also include:
❑ A bootstrap mechanism (e.g., BIOS for servers and ONIE for switches).
❑ A remote device management interface (e.g., IPMI or Redfish).

Cloud Technology - Hardware Platform
1) A physical cloud cluster is then constructed from these hardware building blocks.
2) One or more racks of servers are connected by a leaf-spine switching fabric.
3) The servers are drawn above the switching fabric to emphasize that software running on the servers controls the switches.
[Figure: Example building block components used to construct a cloud ("The Platform"), including commodity servers and switches, interconnected by a leaf-spine switching fabric.]

Leaf-Spine Switching Fabric
▪ A datacenter switching fabric is a network often designed according to a leaf-spine topology.
▪ Each rack has a Top-of-Rack (ToR) switch that interconnects the servers in that rack; these are called the leaf switches of the fabric.
▪ Each leaf switch then connects to a subset of the available spine switches.
[Figure: Example of a leaf-spine switching fabric common to cloud datacenters and other clusters, such as on-premises edge clouds.]

Leaf-Spine Switching Fabric
▪ Leaf switches connect to a subset of the available spine switches, with two requirements (see the sketch below):
1. Any pair of racks must have multiple paths between them.
2. Each rack-to-rack path must be two hops in distance (i.e., via a single intermediate spine switch).
(See the SDN book, Section 2.2.)
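The sketch below, referenced in the list above, models a small fabric in which every leaf connects to every spine and then checks the two requirements mechanically: each pair of racks gets one two-hop path per spine. The fabric size and switch names are illustrative assumptions, not from the source.

```python
from itertools import combinations

# Minimal sketch of a leaf-spine fabric: every leaf (ToR) switch
# uplinks to every spine switch. Names and sizes are hypothetical.
leaves = ["leaf0", "leaf1", "leaf2", "leaf3"]   # one ToR switch per rack
spines = ["spine0", "spine1"]

def rack_to_rack_paths(src_leaf: str, dst_leaf: str) -> list[list[str]]:
    # Every path is exactly two hops: src leaf -> spine -> dst leaf.
    return [[src_leaf, spine, dst_leaf] for spine in spines]

for a, b in combinations(leaves, 2):
    paths = rack_to_rack_paths(a, b)
    # Requirement 1: multiple paths between any pair of racks (one per spine).
    assert len(paths) == len(spines) >= 2
    # Requirement 2: each path traverses a single intermediate spine switch.
    assert all(len(p) == 3 for p in paths)
    print(f"{a} <-> {b}: {paths}")
```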
Cloud Technology - Hardware Platform
In summary, the different platform mechanisms will be responsible for:
(a) bringing up the platform and prepping it to host workloads;
(b) managing the various workloads that need to be deployed on that platform.

Cloud Technology - Software Building Blocks
❑ Four foundational software technologies are assumed, all running on the commodity processors in the cluster:
1. Linux, which provides the isolation needed to run container workloads.
2. Docker containers, which package software functionality; Docker leverages the OS isolation APIs to instantiate and run multiple containers.
3. Kubernetes, which instantiates and interconnects containers (a container management system).
4. Helm charts, which specify how collections of related containers are interconnected to build applications.
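As a small illustration of the division of labor in this stack (Docker packages the software; Kubernetes instantiates and interconnects the containers), the sketch below uses the official `kubernetes` Python client to ask a cluster to run three replicas of a containerized web server. The image, labels, and namespace are illustrative assumptions, and a kubeconfig for a reachable cluster is presumed; in practice, a Helm chart would template such a spec rather than hand-coding it.

```python
# Minimal sketch: asking Kubernetes to instantiate containers.
# Assumes the official `kubernetes` Python client (pip install kubernetes)
# and a kubeconfig pointing at a reachable cluster. Image, labels, and
# namespace are illustrative, not from the source text.
from kubernetes import client, config

config.load_kube_config()  # read credentials from the local kubeconfig

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes keeps three container instances running
        selector=client.V1LabelSelector(match_labels={"app": "demo-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-web"}),
            spec=client.V1PodSpec(containers=[
                # The container image is what Docker packaged.
                client.V1Container(
                    name="web",
                    image="nginx:1.25",
                    ports=[client.V1ContainerPort(container_port=80)],
                ),
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default",
                                                body=deployment)
```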
Cloud Technology - Switching Fabric
❑ In such a cloud, one or more racks of servers are connected by a leaf-spine switching fabric, and operationalizing the cloud includes the switches.
❑ In this example implementation, it is assumed that the cloud is constructed using an SDN-based switching fabric, with a disaggregated control plane running in the same cloud that the fabric interconnects.

Cloud Technology - Switching Fabric
❑ The fabric has the following SDN software stack:
❑ A Network OS hosts a set of control applications, including a control application that manages the leaf-spine switching fabric. We use ONOS as an open-source exemplar Network OS. ONOS, in turn, hosts the SD-Fabric control app.
❑ A Switch OS runs on each switch, providing a northbound gNMI and gNOI interface through which the Network OS controls and configures each switch. We use Stratum as an open-source exemplar Switch OS.

Edge Cloud - Architecture
Under Architecture, only the following two subsystems are presented:
1. Edge Cloud.
2. Hybrid Cloud.
Aether is used to illustrate specific design choices. It is characterized as a Platform-as-a-Service (PaaS).

Edge Cloud - Architecture
Overview of Aether as a hybrid cloud, with:
1) Edge apps and the 5G data plane (called local breakout) running on-prem.
2) Various management and control-related workloads running in a central cloud.

Edge Cloud - ACE
❑ The Aether edge cloud, called ACE (Aether Connected Edge), is Kubernetes-based.
❑ It is a platform that consists of one or more server racks interconnected by a leaf-spine switching fabric, with an SDN control plane (denoted SD-Fabric) managing the fabric.

Hybrid Cloud
❑ The Aether hybrid cloud includes two subsystems running in the central cloud:
1) One or more instances of the Mobile Core Control Plane (CP), and
2) The Aether Management Platform (AMP).
❑ Each SD-Core CP controls one or more SD-Core UPs.
❑ How CP instances (running centrally) are paired with UP instances (running at the edges) is a runtime decision, and depends on the degree of isolation the enterprise sites require.
❑ AMP is responsible for managing all the centralized and edge subsystems (as introduced in the next section).
[Figure: Aether runs in a hybrid cloud configuration, with the Control Plane of the Mobile Core and the Aether Management Platform (AMP) running in the central cloud.]

Summary
Cloud computing was introduced. Network virtualization was introduced. An example cloud and edge implementation was presented.
