Chapter 3: Assessing Cloud Needs
In the first two chapters of this book, we talked about cloud concepts and characteristics. The material definitely had a technical slant—that’s kind of hard to avoid when you’re talking about networking and data storage—but the implicit assumption was that your business needs cloud services. Let’s challenge that assumption for a minute. Does your business really need the cloud? If you’re not sure, how do you determine the answer? To help figure it out, run a cloud assessment. Cloud assessments can also help in situations where you have cloud services but want to understand if you are paying for too many bells and whistles, or maybe you need more. You should run periodic assessments of current services to understand what you have and what your business needs to optimize success.

Part of running a cloud assessment is to understand what clouds can do for your business. From Chapters 1 and 2, you already learned that clouds can provide compute, network, storage, and database capacity. Paying less for those services in a cloud rather than in an on-premises data center certainly helps make the business case for cloud usage. But clouds can benefit your business in so many other ways too, from providing streamlined security to enabling big data analytics to empowering efficient digital marketing campaigns.

In this chapter, we will start with a discussion of using cloud assessments. The knowledge you gain here can be applied whether or not your business already has cloud services. Then, we will finish by looking at specific benefits and solutions clouds can provide for your business. Armed with this information, you will be able to confidently engage a cloud service provider (CSP) and obtain the services you need.

Using Cloud Assessments

If your business isn’t using the cloud today, it’s entirely possible that someone high up in the management chain will pop over to your desk one day and declare, “We need the cloud.
Make it happen!” More likely, though, your company will be under pressure to cut costs and increase productivity—at the same time, so do more with less—and it will be up to you or a small team of people to figure out how to hit those productivity goals. A popular strategy in the last few years (and of course the reason you’re reading this book) is for companies to move some or all of their computing needs to the cloud.

But one doesn’t just snap her fingers and move a corporate network—even pieces of a small one—to the cloud. If you were planning a physical on-premises network, you would spend weeks or months laying out plans, making sure all needs were covered, and going back to revise those plans some more. Planning cloud services should be no different. Improper assessment and planning can result in anything from minor annoyances, such as paying too much, to major catastrophes, like the company being unable to make money.

If you search the Internet, you’ll find dozens of different frameworks for cloud assessment and migration. Each major CSP and cloud consulting agency seems to have their own framework, and at first it can be overwhelming. All of the frameworks can basically be broken down into five steps, as illustrated in Figure 3.1.

Figure 3.1 Cloud implementation framework

[Figure: five sequential stages: Gather Requirements (key stakeholders, define future state); Assessment (baselines, feasibility, gap analysis: business and technical); Design; Implementation (migration); Ongoing Operations/Management.]

In this section, we will focus mostly on the first two categories: gathering requirements and making assessments. To do that, we’ll take you through seven different types of assessment activities that will help determine your company’s cloud needs. Along the way, we’ll present several scenarios to help illustrate when and how to use various aspects of cloud assessments.
In Chapter 4, “Engaging Cloud Vendors,” we will get into more details on the design and implementation steps. Operations and management are covered in Chapter 5, “Management and Technical Operations.”

Gathering Current and Future Requirements

Going back to a question we asked earlier, does your business even need the cloud? Somehow you’ve arrived at the point where you need to figure out the answer to that question. It could have been a directive from management, or perhaps you’re just looking for ways to save the company money or improve productivity. Or, maybe your company already has cloud services, and you need to figure out if they’re the best ones for the current business environment. Whether or not you have cloud services, the first question to ask is, “What do we need?” It’s time to gather some requirements. It’s the first step and also the most important one to ensure a good end result.

Gathering requirements is easiest when you know the right questions to ask and who to ask them of. We’re going to hold off on the questions to ask for now and focus on who we need to talk to. Those are the key stakeholders. Then we’ll get into what we need to talk to them about.

Key Stakeholders

A stakeholder is someone with a vested interest or concern in something. Therefore, a key stakeholder is an important person with a vested interest or concern in something. There may be only one key stakeholder at your company, or there may be more than 100, depending on the size and scope of your organization. If it’s the latter, we feel sorry for you! Taking a systematic approach to identifying and contacting the stakeholders will help you get the input you need for this critical assessment phase.
Here’s a list of some people who could be key stakeholders when it comes to cloud requirements:

- Chief executive officer (CEO)
- Chief financial officer (CFO)
- Chief information officer (CIO)
- Chief technology officer (CTO)
- Chief information security officer (CISO)
- Department managers or supervisors
- Key network and application users

There are quite a few people on that list, especially when you start to consider department managers and network users. If you have a smaller company, you might not have more than one or two “C-suite” executives you need to talk to. Even in many big companies, there are not separate CIO, CTO, and CISO roles. One person handles all information technology, though they might have people reporting to them who handle specific aspects such as information security.

One of the things you will find out during an assessment like this is that almost everyone in the company thinks that they are a key stakeholder. To help make the key stakeholders list manageable, find the most senior person in the organization to be the approver of the list. Ask that person who they think should have input and then work with those people. If employees who are not on that list think they should have input, then they can work that out separately with the list approver.

Another strategy that can help is to have a single point of contact (SPOC) for every department that needs to have input. For instance, maybe you know that the engineering and accounting departments have very different computing needs, but you aren’t an expert in either one. Work with the department manager to identify a SPOC and then have that SPOC gather departmental needs and share the consolidated list with you. It’s far easier to work with one person than to try to work with 20.

A Point about Contacts

Not only is it easier to work with one person than with 20, it’s a lot easier to keep projects organized and on track if one person is taking the lead.
This is why you should use a SPOC—CompTIA Cloud Essentials+ exam objectives refer to it simply as a point of contact (POC)—for multiple aspects of the cloud assessment and migration process. For example, there should be one person designated as the overall leader of the cloud assessment process. This person should be high enough in the organization to make decisions without worry of being overridden by other people. This could be the CEO, CIO, or CTO, or the role could be delegated to someone else with decision-making authority. In addition, we already discussed having a SPOC for each department to gather cloud needs for that department. There should also be a SPOC from your company that the cloud provider interacts with. Perhaps there needs to be a technical leader and a business leader, so two points of contact, but those two people then need to be in constant communication about cloud needs and performance.

Having a designated POC can also help avoid problems with vendors. For example, some CSPs may be overly aggressive about selling additional services. If they don’t get the sale from the person they’re talking to, they might contact someone else to try to get the sale instead. If departmental employees all know that one person is the SPOC who makes the arrangements with the CSP, they can simply refer the CSP back to the original person and avoid any issues. Identify who the points of contact are, and ensure that everyone involved with the project knows who those people are.

Asking the Right Questions

Now that you’ve figured out who to talk to, it’s time to define current and future requirements. What benefits does the company want to get from the cloud? From a timeframe standpoint, identifying current requirements is fairly straightforward. What are the company’s needs right now? The future is quite ambiguous, though. How far in the future should you try to look? One month? One year? Five years? The further out you look, the foggier the crystal ball becomes.
There’s not one right answer to the question—it really depends on your business, its needs, and normal planning cycles. For example, most companies will depreciate large-scale investments over a three- or five-year window. So, if your company does that, perhaps future requirements should attempt to define what will be needed in three or five years. Or, because your company strictly adheres to a yearly budgeting process, all you need to identify are the needs for the next 12 months. Or maybe the company is expanding so rapidly that the only timeframe you can realistically forecast is about six months out. You know your company best and can set the appropriate horizon for when the “future” is.

Even if you are forced into planning in short-term windows, create a long-term (three- to five-year) technology road map. Review it every year to assess the current situation and changing needs and update it as necessary.

Once you’ve established the future timeframe, develop the framework for assessing current and future requirements. Most CSPs have frameworks you can use to help with this process. For example, Amazon has the AWS Cloud Adoption Framework (CAF), which can be found at https://d1.awsstatic.com/whitepapers/aws_cloud_adoption_framework.pdf. The AWS CAF outlines six perspectives you should consider when assessing cloud needs, as shown in Figure 3.2.

Figure 3.2 An adaptation of the AWS CAF

[Figure: six perspectives: Business (value realization); People (roles and readiness); Governance (prioritization and control); Platform (applications and infrastructure); Security (risk and compliance); Operations (management and scale).]

Usually, the business, platform, and security perspectives are top of mind. We’re used to thinking in terms of delivering business results, having the right technology to enable users, and securing data. People, governance, and operations quite often get neglected.
Is there staffing to handle a cloud transition, and are those people trained to perform those tasks? And once the cloud is in place, who will be responsible for governance and making sure everything is working properly? All of those perspectives should be considered when assessing current and future requirements.

Microsoft has its own CAF as well, located at https://docs.microsoft.com/en-us/azure/architecture/cloud-adoption/. Within this trove of information is a downloadable cloud adoption framework strategy and plan template that you can use to help gather requirements. The document has places to list business priorities, organized by priority, and areas to document the stakeholders, desired outcome, business drivers, key performance indicators (KPIs), and necessary capabilities. An example from that document is shown in Figure 3.3.

Figure 3.3 Microsoft’s cloud adoption framework strategy and plan template

KPIs are success metrics for the project. They are often financial (save X amount of dollars) or technical (speed up development time by 20 percent). They should be specific, measurable, and directly reflective of the success of that project.

To summarize, when determining current and future business requirements, get clear on who the key stakeholders are and when the “future” is. Then, create a systematic approach to gathering those requirements. Don’t feel like you need to reinvent the wheel for that approach—plenty of frameworks exist online and can be adapted to meet your business needs.

We referenced CAFs from Amazon and Microsoft, which may imply we are endorsing their products. We’re not recommending any specific cloud provider; we are simply using these as examples from major CSPs. There are dozens of third-party, vendor-agnostic assessment tools available on the Internet—find the one that works for you.
Using Baselines

Your company is currently managing an on-premises network, using cloud services, or running some combination of the two. How well is the current environment performing? While you might hear feedback from disgruntled employees—“the network is terrible!”—quantitative data is likely going to be more helpful. To understand current performance, you need to run a baseline.

A baseline is a test that captures performance data for a system. For example, network administrators often establish baselines for a server’s central processing unit (CPU) usage, random access memory (RAM) usage, network throughput, and hard disk data transfer speed and storage utilization. Determining what to baseline is important, but when to run the baseline is also critical. For example, Amazon’s web servers are likely to be busier during Cyber Monday than they are on some random Wednesday in May. Similarly, a network’s security servers might have more latency (delay) around 8 a.m. when everyone is logging in versus 7:30 p.m. Running baselines at different times during the day is helpful to understand what normal performance and peak performance look like.

Sometimes when getting baselines, you don’t necessarily know if the reading is good or bad. For example, let’s say your network administrator tells you that the server’s CPU is, on average, utilized 30 percent of the time. Is that good or bad? (Generally speaking, it’s pretty good.) In isolation, that reading is just a number. If the server seems to be responding to user requests in a timely fashion, then things are fine. But if a year later the same server has an average CPU utilization of 70 percent, then that 30 percent reading gives you important context. What changed? Are users complaining about response times? Are other services impacted? If so, then an intervention is warranted. If not, then things are probably still fine, but more frequent monitoring of the CPU might be needed.
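Capturing a baseline read doesn't require exotic tooling. Here is a minimal Python sketch using only the standard library; the labels, the dataclass, and the choice of storage utilization as the metric are our own illustration (a real deployment would use a monitoring agent or the CSP's tooling to capture CPU, RAM, and network reads as well):

```python
import shutil
import time
from dataclasses import dataclass

@dataclass
class BaselineRead:
    label: str            # when the read was taken, e.g. "wednesday-8am" vs "cyber-monday"
    timestamp: float      # epoch seconds, so reads can be compared over time
    disk_used_pct: float  # storage utilization for one monitored volume

def capture_storage_baseline(label: str, path: str = "/") -> BaselineRead:
    """Record one storage-utilization data point for later comparison."""
    usage = shutil.disk_usage(path)
    return BaselineRead(label, time.time(), 100.0 * usage.used / usage.total)

# A reading in isolation is just a number; keeping labeled reads over time
# is what gives any single read its context.
before = capture_storage_baseline("before-migration")
```

Running the same capture at different times of day, and keeping the labels, is what turns isolated numbers into a usable picture of normal versus peak performance.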
In the upcoming “Understanding Benchmarks” section, we’ll talk about the usage of benchmarks as context for baseline reads.

From a cloud assessment perspective, the baseline will give you an indicator of current performance. It will help determine how many cloud resources are needed. If your on-premises network is out of storage space, then more cloud storage space is most likely warranted. If local servers’ CPUs are sitting idle, then perhaps fewer virtual servers are needed. The baseline will also be helpful to have at some point in the future, when someone in management asks how much better the network is after you migrated to the cloud. So, first understand current and future requirements for your company’s information technology needs. Then, make sure you have baselines for current performance.

Running a Feasibility Study

The first step in cloud assessment is to understand current and future requirements. Once the future requirements are understood, there are a few new questions to ask. Do the future requirements require the cloud? Does the company have the ability to migrate to the cloud? These questions should be part of a feasibility study, which is used to determine the practicality of the proposed move to the cloud. In other words, we’re back to a question we’ve already asked a few times, which is “Do we need the cloud?” And if so, can we make it happen?

Not all companies need the cloud, and not all IT services should be migrated to a cloud. The first part of the feasibility study should be to determine which services, if any, should be moved. It’s better to figure out that you don’t need the cloud before you sign a contract and begin to migrate! Most CSPs will be happy to help you run a feasibility study for free. Unless your company has internal cloud experts to do it, outsourcing it is a good way to go.
A feasibility study will generally help determine the following things:

- Which capabilities can and should be offloaded to the cloud
- The level of availability your company needs (e.g., three nines, four nines, etc.)
- Compliance, security, and privacy guidelines
- Support services needed, either internal or from the CSP
- A migration path to the cloud

Keep in mind that a cloud migration is not an all-or-nothing solution. For example, your company might decide to migrate only its email services to the cloud and adopt Gmail. Or, maybe your company needs extra storage space for data archives, so a cloud-based backup solution could save the company money. After the feasibility study, you should have a good idea if a cloud solution makes sense for your company and what might be required to complete a migration.

Conducting a Gap Analysis

It’s quite likely that your company’s current and future requirements for IT resources are different. The difference between where you are now and where you want to be is a gap. And a gap analysis can help identify all of the areas where gaps exist. Sometimes you will hear a gap analysis referred to as a needs analysis or a needs gap analysis.

Oftentimes, the output of a feasibility study will include data that can be used for a gap analysis. You can add information to that report or create your own system to identify and track progress toward closing the gaps. There’s no one official gap analysis or way to do it—it’s really a matter of being organized and tracking where you are and where you need to be. Figure 3.4 shows a simple template you could use for a gap analysis.

Figure 3.4 Gap analysis template

Category   | Current State | Goal | Action Needed | Priority | Owner | Due Date
Business   |               |      |               |          |       |
People     |               |      |               |          |       |
Governance |               |      |               |          |       |
Platform   |               |      |               |          |       |
Security   |               |      |               |          |       |
Operations |               |      |               |          |       |

As you can see in Figure 3.4, we structured the categories based on the six areas of focus from the AWS CAF we discussed earlier in this chapter.
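A template like the one in Figure 3.4 usually lives in a spreadsheet, but the same structure can be sketched in code. In this illustrative Python sketch, the field names mirror the template's columns; the numeric priority scale (1 = showstopper) is our own assumption, not something the template prescribes:

```python
from dataclasses import dataclass

@dataclass
class Gap:
    category: str       # Business, People, Governance, Platform, Security, or Operations
    current_state: str
    goal: str
    action_needed: str
    priority: int       # assumed convention: 1 = showstopper ... 3 = minor inconvenience
    owner: str
    due_date: str       # ISO date string, so lexical sort matches chronological order

def showstoppers(gaps):
    """Surface priority-1 gaps first, ordered by due date."""
    return sorted((g for g in gaps if g.priority == 1), key=lambda g: g.due_date)

# Hypothetical example rows, one technical and one people-related:
gaps = [
    Gap("Platform", "4 physical servers at capacity", "6 VMs in cloud",
        "Size and provision VMs", 1, "Cloud SPOC", "2024-06-01"),
    Gap("People", "No cloud-trained staff", "2 trained admins",
        "Schedule training", 2, "IT manager", "2024-09-01"),
]
```

Keeping the rows in a structured form like this makes it trivial to sort, filter, and report on gaps as they close.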
Your business might or might not have gaps in all of these areas, and the template can be modified accordingly. In addition, there may be multiple gaps within one area. For example, let’s say that under Platform, you have gaps in hardware needs and up-to-date applications. Those are gaps that require different solutions and should be treated separately.

One of the nice features of the template in Figure 3.4 is that it covers both business and technical gaps. One isn’t necessarily more important than the other, but significant gaps in either area can cause a potential cloud solution to under-deliver versus expectations. Since you will be focusing on a cloud solution that is clearly rooted in technology, it can be easy to focus there and forget about the business or people side of things. It’s just as critical to have the right staff in place with the proper training as it is to have the right number of virtual machines (VMs) in the cloud.

Gap analyses can also help with the following:

- Prioritizing the allocation of resources, such as where to assign technical staff or spend budget
- Identifying which technical features or functions have been left out of the migration plan
- Determining if there are compatibility issues between any components in the migration plan
- Identifying policies or regulations that are not being met with the current migration plan

The gap analysis itself doesn’t provide a plan for remediation. It’s just a road map surfacing the issues so they can be worked on. Some gaps may be showstoppers, while others are minor inconveniences. Having all of them laid out in one place can help the team and decision-makers understand what it will take to get from point A to point B and then decide if it’s worth the investment in time and money. Once the gap analysis document is created, it’s a great utility to help measure progress against goals.
It can tell you how close you are (or not) to achieving all of the goals set out in it, and if additional resources are needed to expedite the process. Much like with feasibility studies, CSPs will be happy to help with gap analyses with the goal of earning your business.

Using Reporting

Earlier in this chapter, we talked about the importance of running baseline reads to understand system performance. We also talked about running baselines over time. It’s a good idea to put all of the baseline reads into one place and have some system for reporting the data. That is, share it with the people who need to know. It could be the network administration team, senior leaders, or a combination of stakeholders.

There are three key cloud performance metrics to baseline and report on:

- Compute
- Network
- Storage

Getting into the specific performance metrics to look at is beyond the scope of this book. Just know that performance in those three areas can make or break your network experience.

Monitoring memory (RAM) usage is also important in real life. In the context of the cloud, memory is considered a compute resource. When you provision a virtual computer, you’ll choose how many CPU cores and how much RAM is allocated to it. Because RAM is a compute resource, you won’t see it explicitly listed as one of the reporting metrics in the CompTIA Cloud Essentials+ exam objectives.

If you have cloud services already, the CSP will provide usage reports that you can view and analyze. The usage reports will also be used to bill your company for services. We discuss that more in Chapter 5.

So, how does reporting fit into cloud assessments? If you don’t have cloud services, then you can use whatever reports you have to help you determine the amount of cloud resources you need. For example, if your network has four servers and they’re all pretty much at capacity, then you might want to invest in five or even six virtual servers.
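Putting all of the baseline reads in one place for reporting can be as simple as bucketing each read under its reporting area (compute, network, or storage) and averaging. This is a bare-bones sketch under those assumptions; any real CSP reporting dashboard does far more:

```python
from collections import defaultdict
from statistics import mean

def summarize(reads):
    """Average a series of (area, value) baseline reads per reporting area."""
    buckets = defaultdict(list)
    for area, value in reads:  # area is "compute", "network", or "storage"
        buckets[area].append(value)
    return {area: mean(values) for area, values in buckets.items()}

# Hypothetical reads: CPU utilization and storage utilization percentages.
report = summarize([("compute", 30.0), ("compute", 70.0), ("storage", 80.0)])
```

Even a summary this simple is enough to answer the basic reporting question: on average, how utilized is each of the three key resource areas?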
If you do have cloud services, then the reports can help you understand resource utilization. Based on that, you can recommend scaling up or down the number of resources you’re paying for as appropriate.

Understanding Benchmarks

Earlier in the “Using Baselines” section, we talked about the importance of understanding system performance. We also mentioned that baselines without context can be hard to interpret. That’s where benchmarks come into play. A baseline is a read of performance, whereas a benchmark is a standard or point of reference for comparison.

Let’s go back to the example we used earlier of a server that has 30 percent CPU usage. If you know that the standard benchmark is to be at 70 percent or lower utilization, then the 30 percent reading looks really good. That’s important context. Now, if users are complaining about the server being slow, you and the network administrators know that it’s probably not the CPU’s fault.

You might hear the terms baseline, benchmark, and reporting (as in performance reporting or resource reporting) thrown around interchangeably. For example, some CSPs might show you tools they call benchmarking tools, but what they really do is baseline gathering and reporting, with no valid point of comparison. For colloquial purposes the mixed use of terms is fine, but for the Cloud Essentials+ exam you need to know the differences.

Within the cloud space, if you are assessing potential CSPs, there are a few key areas to benchmark:

- Availability
- Response time
- Incident resolution time

Availability is important and should be specified in the service level agreement (SLA). The more nines you get, the more expensive it is—and you need to ensure that you’re getting what you pay for. When performing a cloud assessment and evaluating a CSP, understand what their typical availability is for clients who pay for a certain level of service.

Response time is another key factor.
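The cost of "more nines" becomes concrete when you translate an availability figure into permitted downtime: each extra nine divides the downtime allowance by ten. A quick calculation, assuming a non-leap 365-day year:

```python
def max_downtime_minutes_per_year(nines: int) -> float:
    """Annual downtime permitted by an SLA of the given number of nines.

    Three nines = 99.9% availability, four nines = 99.99%, and so on.
    """
    unavailability = 10.0 ** -nines          # e.g. 0.001 for three nines
    return unavailability * 365 * 24 * 60    # minutes in a 365-day year

# Three nines allows roughly 526 minutes (about 8.8 hours) of downtime a year;
# four nines allows roughly 53 minutes.
```

Numbers like these make it much easier to judge whether the premium a CSP charges for an extra nine is worth it for a given workload.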
Because cloud services are accessed over the Internet, there’s always going to be a bit of latency. The question is how much latency is acceptable. You will probably have different expectations of acceptability depending on the use case. For example, a customer service or sales database needs to be very responsive. Pulling an email out of an archive created eight years ago and now in cold storage will take a little longer, and that should be okay.

Finally, there is incident resolution time. Clouds have a lot of great features and are generally reliable, but things happen. Hardware failures, security breaches, and other problems arise. How quickly the CSP typically responds to those incidents should be known before entering into any agreement.

Most CSPs will have benchmarking tools available for use. For example, AWS has EC2 Reports, and Azure has a suite of tools as well. If you don’t happen to totally trust the CSP, third-party benchmarking tools are available too. For example, CloudHarmony (https://cloudharmony.com) provides objective performance analyses, iPerf (https://iperf.fr) can give you network bandwidth measurement, and Geekbench (https://geekbench.com) can measure cloud server performance.

Creating Documentation and Diagrams

Documentation always seems to get overlooked. The process of creating documentation can seem tedious, and some people wonder what value it adds. There are, of course, people who love creating documentation—those are the ones you want in charge of the process!

When performing a cloud assessment, and later a migration, documentation and diagrams are critical. We’ve already looked at documentation used for a gap analysis and creating reports and baselines for system performance. All of that should be kept in a central location, easily accessible to those who need it.
In addition, consider documentation and diagrams for the following:

- Conversations with stakeholders and team members
- Location of all resources and applications pre- and post-migration
- Owners of capabilities and processes
- Internal and external points of contact

Many teams can benefit from using a collaboration tool to track and store documentation. For example, Microsoft Teams (https://products.office.com/en-us/microsoft-teams/group-chat-software) is a cloud-based platform integrated into Office 365 that allows for file storage, collaboration, chat, application integration, and video meetings. Slack (https://slack.com) is a popular alternative, and there are dozens more available such as Flock (https://flock.com), HeySpace (https://hey.space), and Winio (https://winio.io).

As a reminder, for the Cloud Essentials+ exam, you need to know how to use the appropriate cloud assessments, given a scenario. Be familiar with all of the following:

- Current and future requirements
- Baseline
- Feasibility study
- Gap analysis (business and technical)
- Reporting (compute, network, and storage)
- Benchmarks
- Documentation and diagrams
- Key stakeholders
- Point of contact

Understanding Cloud Services

The cloud has fundamentally changed the landscape of computer networking and how we use technology. That’s a bold statement, but it’s not an exaggeration. If you look back 10 or 20 years, companies that wanted a network had to buy hardware for the network, servers, and client machines as well as needed software packages. The internal or contracted support team had to install, test, and configure all of it. Any needed changes took time to test and implement. And of course, hardware and software became out of date, so companies would have to write off or depreciate their assets and buy new ones to stay with the times. Now, all of this can be done virtually.
A server still does what a server did back then, but the definition of the server itself may have completely changed. Instead of one collection of super-powerful hardware, there could be dozens of servers using the same hardware, and each can be turned on or off nearly instantly. Software is rarely purchased in perpetuity anymore but rather rented on a subscription basis. And while the support team is still important, a lot of technical expertise can be outsourced. In the majority of cases, the new model is also cheaper than the old one!

As cloud technology has matured, CSPs have also been under pressure to expand their business models. Many started to take services that are common in an on-premises network and make them available via the cloud. Others dreamed up new ways of doing business that could help their clients gain efficiencies and lower costs. In this section, we are going to examine 11 different services that CSPs offer. Some of them exist thanks to the cloud, whereas others are simply made better thanks to cloud technology. Keep in mind that the cloud has been popular for business use for only about a decade now. CSPs have likely just scratched the surface on the types of offerings they can make, and more will be on the way. It’s just the natural cycle of things when the landscape is being fundamentally changed.

Identity Access Management

You probably don’t need us to lecture you on the importance of network security. The last thing you need is for your company to be in the news headlines for a security breach or for hackers stealing plans for a prototype before you can go to market. Clearly, security was important before the cloud existed. Digital assets need to be protected, and the way it’s done is to verify that the person trying to log in and access those resources is who they say they are. This is identity access management. A user is assigned a user account and told to create a password.
When the user tries to log in, those credentials are presented to an authentication server that either grants or denies access. The same thing happens for individual resources once a user is logged in. If a user tries to access a file or other resource, their identity is compared to a security list to see if they have the appropriate level of permissions.

When you’re using the cloud, though, all information—including login credentials such as usernames and passwords—is being transmitted across the Internet. Fortunately, it is generally encrypted, but having the Internet in play introduces an extra element of risk. So we’ll spare the lecture for now and instead focus on three methods to simplify or strengthen identity access management in the cloud: multifactor authentication, single sign-on, and federation.

Multifactor Authentication

The simplest form of authentication is single-factor authentication. A single-factor system requires only one piece of information beyond the username to allow access. Most often, this is a password. Single-factor authentication is quite common, but it’s probably the least secure of the authentication systems available. It’s better than no authentication at all, though!

To increase security, your network or cloud services might require multifactor authentication (MFA), which as the name implies requires multiple pieces of information for you to log in. Generally speaking, in addition to a username, MFA requires you to provide two or more pieces of information out of the following categories:

- Something you know
- Something you have
- Somewhere you are
- Something you are

Something you know is generally a password. If you forget your password, a website might ask you to provide answers to security questions that you selected when you registered. These are questions such as the name of your elementary school, father’s middle name, street you grew up on, first car, favorite food or musical artist, and so forth.
One-time passwords can be generated by sites to give you a limited time window to log in. These are far more secure than a standard password because they are valid for only a short amount of time, usually 30 minutes or less. The password will be sent to you via a push notification, text message, email, or phone call.

Something you have can be one of a few different things, such as a smart card or a security token. A smart card is a plastic card, similar in dimensions to a credit card, that contains a microchip that a card reader can scan, such as on a security system. Smart cards often double as employee badges, enabling employees to access employee-only areas of a building or to use elevators that go to restricted areas, or as credit cards. Smart cards can also be used to allow or prevent computer access. For example, a PC may have a card reader on it through which the employee has to swipe the card or that reads the card's chip automatically when the card comes into its vicinity. Or, they're combined with a PIN or used as an add-on to a standard login system to give an additional layer of security verification. For someone to gain unauthorized access, they have to know a user's ID and password (or PIN) and also steal their smart card. That makes it much more difficult to be a thief!

A security token, like the one shown in Figure 3.5, displays an access code that changes about every 30 seconds. The token is synchronized with your user account, and the algorithm that controls the code change is known by the token as well as your authentication system. When you log in, you need your username and password, along with the code on the token.

Figure 3.5 RSA SecurID

Security tokens can be software-based as well. A token may be embedded in a security file unique to your computer, or your network may use a program that generates a security token much as the hardware token does.
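The rotating code on a hardware or software token is typically produced by a time-based one-time password (TOTP) algorithm in the spirit of RFC 6238: both sides share a secret and hash it together with the current 30-second time window. A minimal sketch (the secret here is invented; real tokens also handle secret provisioning and clock drift):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time code for the given Unix timestamp."""
    counter = timestamp // step                       # 30-second time window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

secret = b"shared-between-token-and-server"
# The token and the authentication server compute the same code for any
# timestamp inside the same 30-second window...
print(totp(secret, 30) == totp(secret, 59))   # True
# ...and the code rolls over when the window does.
print(totp(secret, 59))                        # a 6-digit code
```

This is why the server can verify the code without any message from the token: both simply run the same math over the same secret and clock.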
Figure 3.6 shows an example of PingID, which works on computers and mobile devices. This type of token saves you from having to carry around yet another gadget.

Figure 3.6 PingID

Somewhere you are, or the location you are logging in from, can also be a security factor. For example, perhaps users are allowed to access the company's private cloud only if they are on the internal corporate network. Or, maybe you are allowed to connect from your home office. In that case, the security system would know a range of IP addresses to allow in based on the block of addresses allocated to your internet service provider (ISP). Sometimes you will hear the terms geolocation or geofencing associated with this security mechanism.

Finally, the system could require something you are—that is, a characteristic that is totally unique to you—to enable authentication. These characteristics are usually assessed via biometric devices, which authenticate users by scanning for one or more physical traits. Some common types include fingerprint recognition, facial recognition, and retina scanning. It's pretty common today for users to log into their smartphones enabled with fingerprint or facial recognition, for example.

Single Sign-On

One of the big problems that larger networks must deal with is the need for users to access multiple systems or applications. This may require a user to remember multiple accounts and passwords. In the cloud environment, this can include some resources or apps that are on a local network and others that are cloud-based. The purpose of single sign-on (SSO) is to give users access to all of the systems, resources, and apps they need with one initial login. This is becoming a reality in many network environments, especially cloud-based ones. SSO is both a blessing and a curse. It's a blessing in that once the user is authenticated, they can access all the resources on the network with less inconvenience.
Another blessing is that passwords are synchronized between systems. When a password is changed in one system, the change replicates to other linked systems. It's a curse in that it removes potential security doors that otherwise exist between the user and various resources.

While SSO is not the opposite of MFA, they are often mistakenly thought of that way. You will hear the terms one-, two-, and three-factor authentication, which refer to the number of items a user must supply to authenticate. After authentication is done, then SSO can take effect, granting users access to multiple types of resources while they are logged in.

A defining characteristic of SSO is that it only applies to resources contained within one organization's security domain. In other words, if Corporation X has Linux servers, Windows servers, apps, and files that are all under the Corp X security umbrella, then SSO can grant users access to all of those resources. What SSO can't do, however, is cross organizational boundaries. So if Corp X wants their users to access local resources as well as a cloud-based app such as Microsoft Office 365, SSO can't provide the login to both. In that case, federation is needed.

Federation

Let's continue with the example we just introduced: your company wants users to be able to access company-secured resources as well as the cloud-based Microsoft Office 365 with one login. Is that possible? The answer is yes, but it's not through SSO. The technology that enables this is called federation, also known as federated identity management (FIM).

In simplest terms, federation is SSO across organizations or security domains. For it to work, the authentication systems from each organization need to trust each other—they're federated. Authorization messages are passed back and forth between the two systems using secured messages. One example of federation implementation is via Security Assertion Markup Language, or SAML.
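In code terms, the trust relationship can be pictured as follows. This is a toy simulation of the idea only, not real SAML; the domain and user names are hypothetical:

```python
class SecurityDomain:
    """A stand-in for one organization's authentication system."""

    def __init__(self, name):
        self.name = name
        self.trusted = set()       # names of federated partner domains
        self.logged_in = set()     # users currently authenticated here

    def federate_with(self, other):
        # Federation is mutual: each side agrees to trust the other's assertions.
        self.trusted.add(other.name)
        other.trusted.add(self.name)

    def assert_authenticated(self, user):
        """Stand-in for a signed assertion that this user is legitimate."""
        return user in self.logged_in

def access_granted(service_domain, home_domain, user):
    # The service accepts an assertion only from a federated partner.
    return (home_domain.name in service_domain.trusted
            and home_domain.assert_authenticated(user))

corp_x = SecurityDomain("CorpX")
office365 = SecurityDomain("Office365")
corp_x.logged_in.add("jdoe")       # jdoe logs into the Corp X network once

print(access_granted(office365, corp_x, "jdoe"))   # False: not federated yet
office365.federate_with(corp_x)
print(access_granted(office365, corp_x, "jdoe"))   # True: trusted assertion accepted
```

The user never authenticates directly to the second domain; the second domain simply trusts the first domain's word, which is the essence of federation.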
SAML is pronounced like the word sample without the "p." There are others as well, but SAML is very popular. Let's take a look at how the federated login process works, continuing with our example:

1. User jdoe provides a username and password to log into the Corp X corporate network. Corp X's security servers authenticate jdoe.
2. User jdoe tries to open Microsoft Office 365 in the cloud.
3. Office 365 sends a SAML message to Corp X's security servers, asking if jdoe is authenticated.
4. Corp X's security servers send a SAML message back to Microsoft's security servers, saying that jdoe is legitimate.
5. User jdoe can use Office 365.

Again, it's like SSO, just across security domains. All the user knows is that they type in their username and password once, and they can get to the resources they need. The same example holds true if the Corp X network is cloud-based, using AWS, for example. Finally, although we've been speaking of federation in one-to-one terms, know that an organization can be a member of multiple federations at the same time.

The primary benefit to using federation is convenience, but there is a financial upside as well. There is less administrative overhead required in setting up and maintaining security accounts, because the user's account needs to be in only one security domain. There are two primary disadvantages. First, there is a bit of administrative overhead and cost associated with setting up a federated system. Second, you really do need to trust the other federated partners and be sure their security policies match up to your own. For example, if your company requires government-level security clearances but another organization does not, they might not be a good federation partner.

Cloud-Native Applications

One of the big ways that the cloud has changed computing is in the application development space.
The cloud has enabled more agile programming techniques, which results in faster development, easier troubleshooting, and lower cost to software companies. Two technologies that enable agile cloud-native applications—that is, apps that wouldn't exist outside the cloud—are microservices and containerization.

Microservices

For most of the history of application development, all executable parts of the application were contained within the same bundle of code. You could think of an application as one file, with anywhere from thousands to millions of lines of code. The key was, everything was in one bundle. Today we'd call that a monolith.

There was nothing inherently wrong with the monolithic development approach—it's just how programs were built. Yes, there were minor inconveniences. One was that to run the program, the entire program needed to be loaded into memory. This could be a serious drain on a computer's RAM, depending on how big the program was. And, if there was a bug in the program, it might have been a little harder for the development team to track it down. But again, there was nothing really wrong with the monolithic approach, and it served us well for decades.

The advent of the cloud allowed for a new approach to software development, one where applications could be broken down into smaller components that work together to form an application. This is called microservices—literally meaning small services. You will also hear it referred to as microservice architecture because it is both a software development approach and an architecture. Either way you look at it, the point is that it's all about breaking apps into smaller components. Figure 3.7 shows a comparison of the monolithic approach to a microservices approach.

Figure 3.7 Monolithic vs. microservices app

As an example of how this works, think of a shopping trip to Amazon.com. Amazon doesn't have just one big monolithic web app. The search feature could be its own microservice, as could the query that gives you additional product recommendations beyond your initial search. The "add to cart" feature could also be a microservice, as well as the part that charges a credit card, and finally the unseen portion of the app that transmits your order to the warehouse for order fulfillment. (To be fair, we don't know exactly how the Amazon site is set up, but it could be set up this way, so it's a good illustrative example.)

Microservices also are handy for letting multiple apps use the same service. For example, say that your company has a product database. The sales team needs to be able to query the database to check prices and make sure the product is in stock. The operations team, using a completely different app, also needs to check stock levels to ensure they reorder from the supplier when needed. The apps are different, but the mechanism to query the database for how much product is available could be the same microservice.

Using microservices ultimately results in faster app development and more flexible apps at a lower cost. It also makes apps easier to update and troubleshoot for the development teams. Finally, different parts of the app can be hosted or executed on different servers, adding to the flexibility of your total network.

Containerization

A container is simply a place to store items in an organized manner. Containerization is just a fancy term saying that items are being placed into a container. In terms of the cloud and software development, it means that all components needed to execute a microservice are stored in the same package or container.
The container can easily be moved from server to server without impacting app functionality and can also be used by multiple apps if needed. Think of microservices as the way to build apps, and containerization as the way to store all of the microservice components in the same place.

Containers are often compared to VMs, which you learned about in Chapter 1, "Cloud Principles and Design." This is because both allow you to package items (in this case, app components; in a VM's case, OS components) and isolate them from other apps or OSs. And, there can be multiple containers or VMs on one physical set of hardware.

If a developer were creating a new app that needed features that were already containerized, his job would be much faster. Instead of re-creating those features, he could use the containers he needed and enable them to talk to each other through an API. He would just need to code the APIs and any new, unique aspects of the program rather than the entire thing. Much like microservices, containerization makes app development faster and cheaper.

Data Analytics

Many companies have realized, or are realizing now, that they are sitting on a ton of data regarding their customers or markets. They're just not sure how to make the best use of it. That's why the field of data analytics is hot right now. People with analytical skills are in high demand as companies attempt to find whatever edge they can with the troves of data they possess.

Most of the time, these companies are looking for analysts with experience in big data. We introduced the concept of big data in Chapter 2, "Cloud Networking and Storage." Big data doesn't necessarily mean a large data set—the data set could be large but doesn't have to be—but rather it refers to unstructured data. Unstructured data doesn't fit neatly into the rows and columns of a spreadsheet or database table, and it's hard to search or infer patterns from.
Great examples are pictures and videos. There are many tools in the marketplace designed to help companies manage and analyze big data. The cloud, with its flexibility and ample resources, has made this easier. Two technologies within data analytics that get a lot of love today, particularly in the realm of big data analytics, are artificial intelligence and machine learning.

Artificial Intelligence

For decades—maybe even centuries—science fiction has been filled with tales of machines or robots that look and act like humans. Depending on the story, this can be a good or bad thing. Most of the time, though, it seems that the human-like robot gets corrupted somehow and does evil deeds (or maybe it's just programmed that way), and nice, real humans need to come save our species. This book isn't a feel-good story about the indomitable human spirit, though—we're here to focus on the intelligent machines.

In essence, artificial intelligence (AI) is the concept of making machines perform smart, human-like decisions. Underpinning this is the fact that the machine needs to be programmed to do the task at hand. For example, you could have a robot that picks up and sorts marbles into bins by color. A properly programmed machine can do this tedious task faster and more accurately than a human can.

AI systems can be classified as applied or general. Most AIs fall into the applied category, meaning that they are designed to perform a certain task or set of tasks. A general AI, in theory, could complete any task set in front of it. This is a lot harder to accomplish, but it's the category into which machine learning falls. We'll talk more about machine learning in the next section.

We already used an example of AI performing a menial sorting task, but it's used in a variety of settings, including the following:

Virtual Personal Assistants Many of us use Siri, Alexa, or Google Now to perform tasks for us.
They can check the weather, play music, turn lights on and off, text or call people, set alarms, order products, and do a variety of other tasks.

Video Games Anyone who has played a game with an in-game, computer-controlled character knows what AI is. Ask anyone who gamed a few decades ago and they will tell you how terrible, predictable, and honestly unintelligent the AI was. Today they've gotten much better at reacting to the human-controlled character's actions and the environment.

Digital Marketing We have an entire "Digital Marketing" section later in this chapter, but AI plays an important role. Products or advertisements can be tailored to your Internet search history. Coupons may become available based on your purchase habits. These are both directed by AI.

The Financial Industry Computers can be programmed to detect transactions that appear abnormal, perhaps because of the frequency or size of transactions that normally occur. This can help identify and protect against fraud.

The Transportation Industry Smart cars are being tested today and might be within only a few years of becoming commonplace. In addition, tests have been conducted on smart semis, carrying freight from one end of the country to the other. AI enables these capabilities.

You can see that some of the actions that can be taken by AI are pretty complex. Driving is not easy, and financial fraud can be very hard to detect. Regardless of how complex the activity is, the key to AI is that it has to be programmed. The AI can only react to conditions it has been coded to react to, and only in the way it has been told to react. For example, if our marble sorting machine knows how to sort blue and red marbles only, it won't know what to do with a green one that it encounters.

Machine Learning

The terms artificial intelligence and machine learning (ML) often get confused with each other, but they're not the same thing.
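The marble-sorter limitation just described can be sketched in a few lines. The colors and bin names are invented for illustration:

```python
# A plain rule-based AI handles only the inputs it was explicitly
# programmed for: blue and red marbles, nothing else.
SORT_RULES = {"blue": "bin_1", "red": "bin_2"}

def sort_marble(color):
    """Route a marble to its bin, or report that no rule covers it."""
    return SORT_RULES.get(color, "unhandled")

print(sort_marble("blue"))    # bin_1
print(sort_marble("green"))   # unhandled: no rule was ever written for green
```

The fixed rule table is the point: this sorter will never get better at handling green marbles unless a programmer adds a rule, which is exactly the gap that machine learning addresses.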
With AI, the machine needs to be programmed to respond to stimuli to complete a task. With machine learning (ML), the idea is that data can be fed to a machine and it will learn from it and adapt. ML is a specialized form of AI.

ML "learns" through the use of a neural network, which is a computer system that takes input and classifies it based on preset categories. Then, using probabilities, it tries to make a decision or prediction based on new input it receives. A feedback loop to tell the system whether it was right or wrong helps it learn further and modify its approach.

For example, let's say you want to create an ML program that identifies fans of your favorite sports team. To help it learn, you feed it several images of fans in the right apparel or showing the correct logo. Then you would feed it new stimuli, and the program would guess if it's an image of that team's fan or not. You tell it whether it got it right or wrong. That process of feeding it new stimuli, it guessing, and you providing feedback gets repeated several times. Eventually, the system gets better at recognizing the correct elements in the photos and increases its accuracy.

The applications that AI and ML are used for are usually very different. AI needs to be programmed for all potential inputs and outcomes to work properly. ML can learn from new inputs whether it's correct or not. Here are some examples of where ML can be applied:

Image recognition, including facial recognition
Speech and audio recognition. For example, an ML program can hear a new song and determine whether the song is likely to make people happy or sad.
Medical diagnoses. ML can take patterns of medical data and make a diagnosis of a disease.
Financial modeling. An ML program can take inputs and predict what a stock or the market will do.
General prediction models.
We've used a few examples already (such as the last three), but ML is great at making predictions based on a set of rules when given new input.
Data extraction. ML can take unstructured big data, such as pictures, videos, web pages, and emails, and look for patterns, extracting structured data.

The list here only scratches the surface of what ML can do. All of the major CSPs offer AI and ML packages, such as Google Machine Learning Engine, Azure Machine Learning Studio, and AWS Machine Learning. Other companies offer competing products as well; with an Internet search, you're sure to find one that meets your needs.

Digital Marketing

There are few guarantees in life besides taxes, death, and receiving junk mail. While it may be annoying to get emails that we didn't ask for, occasionally one of them catches our eye and we check it out—after verifying that the company is legit and it's not a phishing scam, of course. So even if you find them annoying, they do work! Emails from companies selling products or services are one example of digital marketing, that is, the marketing of products or services using digital technology. Contrast this with other types of marketing, such as commercials on the radio or television, billboards, or paper mail.

The CompTIA Cloud Essentials+ exam objectives specify two types of digital marketing: email campaigns and social media. Here's a quick description of each of them:

Email Campaigns Email campaigns are when companies send automated emails out to potential customers. Most of the time, the email will contain a link for people to click, which will take them to a company web page or a specific product page.

Social Media Social media refers to platforms such as YouTube, Facebook, Instagram, Twitter, Pinterest, and others where people post comments, pictures, and videos and digitally interact with other people. There are two ways advertising is usually done on social media.
In the first, the social media marketing campaign will create a post in the same format as a "normal" post on the platform, in an effort to look like the message was posted by a friend. Like an email campaign, this post will have somewhere to tap or click to take you to a product page. Figure 3.8 shows a Facebook ad and a Twitter ad that look like posts. The second is used in formats that support video. Unskippable video ads can be inserted before (called pre-roll) or in the middle of (mid-roll) content the user is viewing.

Even though email and social media are different types of platforms, the benefits of each and the tools and methods used to activate them are quite similar, so we'll treat them together as digital marketing. Here are some of the benefits of using digital marketing:

Better customer engagement, e.g., response rates
Real-time results, monitoring, and optimization
Enhanced analytics
Campaign automation and integration
Lower costs

Figure 3.8 Facebook (left) and Twitter (right) ads

Let's take a look at each of these in more detail:

Better Customer Engagement Social media companies have a frightening amount of data about their users. Some people are aware of this, so it's not uncommon to hear others say they don't do social media because of it. That's fine, but everyone uses a search engine, and Google (or whoever) stores and uses that information too. It's almost impossible to be truly anonymous on the Internet.

While it might be frightening from a user standpoint, all of that data is a gold mine for the digital marketer. The data that Google, Facebook, and others have can be used to create target audiences that meet a profile of customers your company wants to talk to. For example, say that your company is making organic baby food. You can have Google or Facebook create a target audience of women who have an interest in healthy eating and have children under the age of six.
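Conceptually, building such a target audience is just filtering profile data. A minimal sketch with invented field names and profiles (real platforms work at vastly larger scale and with far richer signals):

```python
# Hypothetical user profiles, standing in for a platform's user database.
profiles = [
    {"name": "A", "interests": {"healthy eating", "yoga"}, "youngest_child_age": 3},
    {"name": "B", "interests": {"gaming"},                 "youngest_child_age": 4},
    {"name": "C", "interests": {"healthy eating"},         "youngest_child_age": 10},
    {"name": "D", "interests": {"healthy eating"},         "youngest_child_age": 1},
]

def target_audience(profiles, interest, max_child_age):
    """Select profiles matching an interest, with a child under the age cap."""
    return [p["name"] for p in profiles
            if interest in p["interests"]
            and p["youngest_child_age"] < max_child_age]

# The organic-baby-food audience from the example above:
print(target_audience(profiles, "healthy eating", 6))   # ['A', 'D']
```

Stacking more criteria onto the filter only ever shrinks the resulting list, which is why very precise audiences are smaller and cost more to reach.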
You can add more criteria as well, but the more you add, the smaller the target audience gets and the more expensive it becomes to reach people. The Googles and Facebooks of the world will also let you use your first-party data to create target audiences. You can give them a list of email addresses, and they will match them up to people known in their database. They'll parse common characteristics and target people like the ones you've already provided.

All of this leads to better customer engagement, because you are targeting people who should be interested in your product. It's far more efficient than sending out glossy postcards to everyone in a postal code and hoping for a response. All good digital marketing campaigns have a call to action (CTA), asking potential customers to tap or click somewhere to further interact with the company's materials or buy a product or service. When potential customers do tap or click, their profile information is tracked via cookies to help the marketers further optimize target audiences and messaging.

Real-Time Results, Monitoring, and Optimization After you've kicked off a digital marketing campaign, you will want to analyze the performance to see how it did. Most platforms allow you to monitor results in real time, literally viewing how many clicks the ad gets by the minute. Let's say you pick two different target audiences and show them your ad. You can see if one is performing better than the other and then increase or decrease your investment as appropriate. Or, you could take one audience group and split it into two and show each group a different message. Then, you can see which message performs better and optimize as appropriate. Some larger digital marketers will change messages and audiences daily, depending on how well their placements are performing.

Enhanced Analytics Enhanced analytics goes hand in hand with results monitoring and optimization.
Digital marketing platforms such as Google, Facebook, and others will have built-in analytics dashboards. You will be able to set your success measures—also known as key performance indicators (KPIs)—and measure progress against them. For example, you might want to look at click-through rate, video completion rate, or engagements (likes, comments, and shares).

Campaign Automation and Integration Another beautiful feature of digital marketing is the ability to automate campaigns and integrate across platforms. Automation includes simple tasks such as turning ads on at a specific time or automatically sending emails to people who have clicked on your ad. Integration across platforms is handy to ensure that you're sending the same message to all potential customers.

Lower Costs Digital marketing campaigns can be completed at a fraction of the cost of traditional marketing methods. For example, if you had to send a mailing out to people, you need to pay for the creation of the mail (either a letter or a postcard) as well as postage. And while you might be able to target an audience by sending it to people for whom you have addresses, the targeting capabilities are nothing like they are in the digital world.

Digital marketing existed before the cloud was popular, but cloud services have definitely made digital marketing faster, more accurate, and less expensive. We've mentioned a few digital marketing platforms, and certainly Google and Facebook are huge. Traditional cloud providers AWS and Azure have digital marketing services too, as do many others such as Salesforce, Nielsen, Oracle, and Teradata.

Autonomous Environments

Over the last decade or so, Google, Tesla, and Uber have been among the big companies making noise by testing self-driving cars. Some of the press has been good, but there have been bumps in the road as well. Embark, Peloton, and TuSimple have been experimenting with self-driving semi-trucks as well.
In the computer field, Oracle has been working on automating routine database tasks such as backups, updates, security, and performance tuning. Many manufacturing facilities have nearly eliminated the need for human employees. All of these are examples of autonomous environments—a computerized environment where complex human actions are mimicked but without human intervention. You can debate whether autonomous environments are a good thing, but what's not debatable is that it's a growing trend.

Autonomous environments such as self-driving vehicles work by using a combination of programming, cameras, sensors, and machine learning. Think about driving a car, and all of the inputs that a person needs to be aware of. There's your speed, the speed of cars around you, the direction they are moving, pedestrians, stop lights, lane merges, weather, and probably 100 other things to pay attention to.

It might feel like there is some ambiguity about what defines AI versus an autonomous environment, and there is. Autonomous environments do use AI but are generally considered to be a more complex environment that likely combines several AI components into a bigger system.

All of this input generates a huge amount of data. A 2018 study by Accenture ("Autonomous Vehicles: The Race Is On") estimated that a self-driving car's cameras and sensors generate between 4 and 10 terabytes of data per day. The average laptop's fast, solid-state hard drive holds about 500 gigabytes—it would take about 20 of these to hold 10 terabytes. That's a lot of data. Without the cloud, these ambitious projects wouldn't work.

It's Not Just the Data Storage

Endpoint devices within an autonomous environment (such as the car) need to be able to communicate with central planning and management devices. Current wireless cellular transmission standards are capable of supporting the bandwidth on a limited basis, but not on a massive scale.
It's estimated that self-driving cars need to transmit about 3,000 times as much data as the average smartphone user. Wireless infrastructure will need massive upgrades before it can support widespread use of self-driving cars.

Using cloud services, your company may be able to save costs by automating certain processes. Of course, all of the usual suspects—big CSPs—will be happy to help you with your automation needs.

Internet of Things

The Internet of Things (IoT) is the network of devices that are able to communicate with each other and exchange data, and it's a very hot topic in computing today. It wouldn't be possible without the cloud. The term things is rather loosely defined. A thing can be a hardware device, software, data, or even a service. It can be something like a jet engine, or it can be multiple sensors on the fan, motor, and cooling systems of that engine. The key is that these things are able to collect data and transmit it to other things. In this section on the IoT, we'll start with places in which it's being used today and finish with potential concerns that your company may need to watch out for.

Consumer Uses

Thus far, most IoT-enabled devices have been produced for consumer use. The concept seems pretty useful—multiple devices in your home are IoT-enabled, and you control them through a centralized hub, regardless of where you are. Did you go to work and forget to turn the lights off? No problem. You can do it from your phone. Did someone just ring your doorbell? See who it is and talk to them in real time through your phone as well. Let's take a look at a few specific examples:

Home Entertainment Systems Home entertainment is definitely the largest segment of IoT devices on the market today. Most new high-definition televisions are smart TVs with built-in wireless networking and the ability to connect to streaming services such as Netflix, Hulu, YouTube, or fuboTV.
Other audio and video equipment will often have built-in networking and remote-control capabilities as well. Not all smart TVs are really that smart, and large televisions are definitely not portable. For people who want to take their entertainment with them, there are streaming media devices. Examples include Roku, Amazon Fire, Apple TV, Chromecast, and NVIDIA SHIELD. Some are small boxes, like the slightly old-school Roku 3 shown in Figure 3.9, while others look like flash drives and have a remote control. Each has different strengths. For example, Amazon Prime customers will get a lot of free content from the Amazon Fire, and NVIDIA SHIELD has the best gaming package.

Figure 3.9 Roku 3

Security Systems While home entertainment is currently the biggest market for smart home devices, smart security systems come next. Some systems require professional installation, whereas others are geared toward the do-it-yourself market. Many require a monitoring contract with a security company. Popular names include Vivint, Abode Home Security, ADT Pulse, and SimpliSafe. While specific components will vary, most systems have a combination of a doorbell camera and other IP cameras, motion sensors, gas and smoke detectors, door locks, a garage door opener, lighting controls, a centralized management hub, and touchscreen panels. Of course, you are able to control them from your phone as well.

If you don't want to get an entire security system, you can buy systems to control the lights in your house. You can also get smart wall outlets that replace your existing ones, which enable you to turn devices plugged into them on or off, as well as monitor energy usage.

Heating and Cooling Programmable thermostats that allow you to set the temperature based on time and day have been around for more than 20 years.
The next step in the evolution is a smart thermostat that’s remotely accessible and can do some basic learning based on the weather and your preferences. Figure 3.10 shows a smart thermostat.

Figure 3.10 Nest smart thermostat
By Raysonho @ Open Grid Scheduler/Grid Engine. CC0, https://commons.wikimedia.org/w/index.php?curid=49900570.

Home Appliances
Do you want your refrigerator to tell you when you’re out of milk? Or better yet, do you want it to automatically add milk to the grocery list on your smartphone? Perhaps your refrigerator can suggest a recipe based on the ingredients inside of it. Appliances like this exist today, even if they haven’t enjoyed a major popularity spike. Many consumers don’t see the value in the benefits provided, but perhaps that will change over time. Other examples include smart washing machines, clothes dryers, and ovens. Small appliances can be part of the IoT as well, if properly equipped. You could control your smart coffee maker from bed, starting it when you get up in the morning. Even toasters or blenders could be made into smart devices. It will all depend on consumer demand.

Modern Cars
Thirty years ago, cars were entirely controlled through mechanical means. The engine powered the crankshaft, which provided torque to rotate the wheels. When you turned the steering wheel, a gear system turned the wheels in the proper direction. There were even these nifty little handles you could turn to lower and raise the windows. Not today. While there are clearly still some mechanical systems in cars—most still have gas engines, for example—the cars made today are controlled through an elaborate network of computers. These computers determine how much gas to give the engine based on how hard you press the accelerator, how far to turn the wheels when you crank the steering wheel, and how fast to slow down when you hit the brakes.
They can even do smarter things, such as transfer power to certain wheels in slippery conditions, sense how hard it’s raining to adjust the wiper speed, and warn you if there is a car in your blind spot. Many drivers love their GPS systems, and these are integrated too. Other new enhancements to cars include collision avoidance systems, adaptive cruise control, automated parallel parking, interactive video displays, and more. While much of this innovation is great, it can make cars more expensive to fix if components in this network break. It can also make them susceptible to hackers, who could get in remotely through the GPS, entertainment, or communications systems.

Hacking Your Car
Most modern cars are basically computer networks, and with their communications, navigation, and entertainment systems, they connect to the outside world. And ever since computer networks have existed, someone has tried to hack them. Cars are no exception. While the odds of having your car hacked are pretty low, the effects could be disastrous. Hackers could shut your car off on the interstate or render your brakes inoperable. Auto manufacturers are of course aware of the threats, and in the last few years have taken steps to reduce the chances of a successful attack. Know that it’s still a possibility, though. If you do an Internet search, you can find several articles related to car hacking. One such example is https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway.

In the previous section on autonomous environments, we talked about smart cars. These are only possible thanks to the IoT.

Fitness and Health Monitors
Fitness and health monitors are popular for people who want to keep track of their exercise, calorie consumption, and heart rate. Most are worn on the wrist like a watch, and some are advanced enough to be classified as smartwatches. These devices too can belong to the IoT.
They connect via Bluetooth, Wi-Fi, or cellular connections; collect data about us; and can transmit that data to another device. Popular companies include Fitbit, Garmin, Polar, Apple Watch, and Jawbone.

Commercial Uses

Any number of industries can benefit from interconnected devices. If you are out and about, take a few minutes to just look around at the variety of equipment and electronics in use today. It’s something that we rarely take the time to observe, but when you look for opportunities to embed sensors or other data collection components into things, you can find quite a few. This section explores examples for a few industries.

Medical Devices
The medical industry is full of equipment. If you’ve been to a hospital or clinic, you know that the amount of machinery they have is pretty impressive. While there have been strides in interoperability, not all of the machines can talk to each other or react if something goes wrong. Further, medical applications can be extended outside of the medical facility. Critical electronics such as pacemakers can be connected to a smart system, and patients can wear other devices that monitor their vital signs, such as heart rate and blood pressure. The readings can be transmitted back to an office, where they are recorded and monitored. If something appears to be wrong, an application on the hospital computer can show an alert, notifying the medical professional of a problem. In this way, medical professionals can efficiently keep an eye on many people at once.

Manufacturing
Manufacturing plants have become heavily automated over the last few decades. Robots can do a lot of work, but historically, robots haven’t been very smart. They do what they are programmed to do; if something goes wrong, they aren’t able to adjust. For maintenance personnel, tracking down the problem can often be tedious and time-consuming. The IoT can greatly help here. Sensors built into machines can monitor production rates.
If a component in the machine starts to wear down or fails, the technician will know immediately what to fix. Ideally, the technician will even be able to fix it before it breaks, because the technician can tell that the part is wearing down and will fail soon. Other sensors can be attached to pallets of raw materials or produced goods, enabling instantaneous tracking of the quantity and location of inventory. Finally, perhaps a manufacturing plant has several different lines of products. A smart system could increase or decrease production on the appropriate lines based on real-time customer demand.

Transportation
We’ve already discussed autonomous cars and semi-trucks, which are obvious transportation plays. Another application could be in traffic signals. Have you ever waited at a red light for what seemed like forever, when there was no traffic coming from the cross direction? Some traffic signals do have sensors to detect the presence of cars, but not all do. If you take that one step further, sensors can monitor traffic to determine where there is congestion and then make recommendations to GPS systems to reroute vehicles onto a better path. Finally, there are applications such as being able to set variable speed limits based on road conditions, electronic tolls, and safety and road assistance. Dozens of states use electronic toll collection systems. Users sign up and place a small transponder in or on their car. When they drive through the toll, a sensor records the vehicle’s presence and bills the driver. Electronic toll systems are paired with IP cameras to detect those who attempt to cheat the system. Examples of road assistance apps include OnStar, HondaLink, Toyota Safety Connect, and Ford Sync.

Infrastructure
In 2007, an interstate bridge in Minnesota filled with rush-hour traffic collapsed, sending cars, trucks, and a school bus plunging into the river below. Thirteen people were killed, and 145 more were injured.
This tragic event is a signal that infrastructure isn’t permanent and that it needs to be repaired and upgraded periodically. The IoT can help here too. Sensors can be built into concrete and metal structures, sending signals to a controller regarding the stress and strain they are under. Conditions can be monitored, and the appropriate repairs completed before another tragedy strikes. Sensors can be built into buildings in earthquake-prone areas to help assess damage and safety. In a similar way, IoT-enabled sensors can monitor railroad tracks, tunnels, and other transportation infrastructure to keep conditions safe and commerce moving.

Energy production and infrastructure can be monitored by IoT devices as well. Problems can be detected in power grids before they fail, and smart sensors can regulate power production and consumption for economic or environmental efficiency. Finally, some cities are becoming smart cities using the IoT. Through the use of sensors and apps that residents can download, cities can track traffic flows, improve air and water quality, and monitor power usage. Residents can get some interesting benefits, such as knowing if the neighborhood park is busy or being able to search for an open parking space downtown.

Potential Issues

As you’ve learned, thanks to the cloud, the IoT has a tremendous amount of potential for revolutionizing how people interact with the world around them, as well as how devices interact with each other. All of this potential upside comes with its share of potential challenges and issues as well:

Standards and Governance
As with the development of most new technologies, there is not one specific standard or governing body for the IoT. There are about a dozen different technical standards being worked on. Most of them focus on different aspects of the IoT, but there is some overlap between standards as well as gaps where no standards exist.
For example, organizations such as the United States Food and Drug Administration (FDA) are working on an identification system for medical devices, and the Institute of Electrical and Electronics Engineers (IEEE), the Internet Engineering Task Force (IETF), and the Open Connectivity Foundation (OCF) are all working on standards for communications and data transmissions. In the smart home space, devices produced by different companies might or might not follow similar standards, meaning that interoperability can be challenging. Some people suggest that governments should get involved to enforce standards, while others think that’s the worst idea possible. Generally speaking, standards have a way of working themselves out over time. Just know that if your company invests in the IoT, there might be some interoperability challenges. How the IoT will ultimately turn out is anyone’s guess, but one or two technologies will likely emerge as the winners.

Data Security and Privacy
Two of the reasons that governance can be such a hot topic are data security and user privacy. Stories about Random Company X suffering a data breach are all too common today. And let’s face it: the purpose of many IoT-related devices is to collect information about people and their behaviors. That raises some huge security and privacy concerns.

Security Vulnerabilities Create Risk
The vast majority of adults today carry at least one electronic device on them at all times, whether it be a smartphone, a smartwatch, or a fitness monitor. Because of how common they are and the potential benefits, a lot of kids have smartphones and smartwatches as well. One benefit is that parents can track their child’s location using convenient apps such as Life360, GPS Location Tracker, Canary, and mSpy. One unintended consequence is that other people might be able to as well.
In October 2017, a security flaw was exposed in certain smartwatches specifically made for children. The flaw allowed hackers to identify the wearer’s location, listen in on conversations, and even communicate directly with the wearer. For most parents, the thought of a stranger having this kind of ability is horrifying. No breach was ever reported with these devices, but the possibility is unsettling.

Although security and privacy may seem like the same thing, there are differences. Data security specifically refers to ensuring that confidentiality, availability, and integrity are maintained. In other words, data can’t be accessed by anyone who’s not supposed to access it, it’s available when needed, and it’s accurate and reliable. Privacy is related to the appropriate use of data. When you provide data to a company, the purposes for which it can be used should be specified. This also means that companies to whom you are giving data can’t sell that data to another company unless you’ve given prior approval. If your company is going to use consumer-facing IoT products, be sure to have security and privacy concerns covered.

Data Storage and Usage
IoT-enabled devices generate data—a lot of data. Because of the massive quantities of data generated, most of it gets stored in some sort of cloud-based solution. Again, this poses potential security risks. For companies, the question becomes what to do with all of that data. Just because they have petabytes of data doesn’t mean it will do them any good. But of course, data can be incredibly valuable. Companies that figure out how to mine the data and translate it effectively into actionable insights will be ahead of the curve as compared to their competitors.

Blockchain

A blockchain is defined as an open, distributed ledger that can securely record transactions between two parties in a verifiable and permanent way.
You might or might not have heard about blockchain, but it’s likely you’ve heard of the most famous example of blockchain use, which is Bitcoin. Bitcoin is a digital cryptocurrency—it has no intrinsic value and is not regulated by any official bank or country—that can be used to complete Internet-based transactions between two parties. Blockchain was developed initially to support Bitcoin, but it can also support other types of secure transactions, including financial services records, smart contracts, and even supply chain management. It’s likely that over the next few years, additional uses for blockchain will become more common.

Blockchain Principles

Blockchain operates on three key principles: decentralization, transparency, and immutability.

Decentralization
No one organization or company owns blockchain, and the information is not stored in a central location. A copy of the full blockchain ledger is stored on all computers that participate in the blockchain. It’s truly a democratized transaction system. The decentralized nature of the technology also makes it harder to tamper with data in the blockchain.

Transparency
Everyone in the blockchain has access to all the data in the blockchain. The data itself is encrypted and private, but the technology and ledger are completely open. This again makes data harder to tamper with, and everyone involved is accountable for their actions.

Immutability
Look at nearly any article about blockchain, and you will see the word immutable. It’s just a fancy word that means unchangeable. Because of how transactions are conducted and verified (using cryptography and hashing), blockchain is immutable, which provides a high level of data integrity. If any data in the chain were to be altered, it would be painfully obvious to everyone in the chain that someone tried to alter it.

Each of these will make more sense as we look at how blockchain works.
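The immutability principle rests on cryptographic hashing. As a quick illustration, here is a minimal sketch using Python's standard hashlib library (the transaction strings are made up for the example): changing even a single character of the input produces a completely different hash.

```python
import hashlib

def sha256_hex(data: str) -> str:
    """Return the SHA-256 digest of a string as a 64-character hex value."""
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

original = "Pay Jane Smith 2.5 BTC"
tampered = "Pay Jane Smith 3.5 BTC"  # a one-character alteration

h1 = sha256_hex(original)
h2 = sha256_hex(tampered)

print(h1)
print(h2)
print(h1 == h2)  # False: a tiny change yields a totally different hash
```

This "avalanche" behavior is why tampering is obvious to everyone in the chain: there is no way to nudge a block's contents while keeping its hash intact.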
How Blockchain Works

Data in a blockchain is stored in units called blocks. Blocks (transactions) are stored end to end, like links in a chain. Each block contains the following:

Transaction information such as date, time, and amount
Participants in the transaction
A unique code, called a hash, to identify the transaction
The hash of the previous block in the chain

All data in the block is encrypted. So, if user Jane Smith makes a blockchain transaction, the data inside doesn’t say Jane Smith in plain text but is rather an encrypted user ID that will be unique to Jane Smith.

Let’s look at how a transaction works. It’s also shown in Figure 3.11, because in this case a picture is worth a thousand Bitcoin.

Figure 3.11 The blockchain transaction process

1. Someone requests a transaction, such as by making a purchase from a company that accepts Bitcoin payments.
2. The requested transaction gets sent to a peer-to-peer network of computers (called nodes) that participate in the blockchain network.
3. Using shared algorithms, the nodes validate and verify the transaction. This essentially means solving an encrypted mathematical problem, and this process can take a few days. Small transactions need to be verified by only one or two nodes, whereas large ones need to be verified by at least six nodes.
4. After verification, a new block containing the transaction information is added on to the end of the blockchain ledger. It’s done in a way that makes the data immutable (permanent and unalterable).
5. Each party in the transaction is notified that the transaction is complete. Simultaneously, the new blockchain ledger, with the new block, is transmitted to all nodes on the peer-to-peer network.
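The block contents listed above can be modeled in a few lines of Python. This is a hypothetical toy model, not real Bitcoin code (real blockchains add consensus, encryption, and networking on top), but it shows how each block carries its own hash plus the hash of its predecessor:

```python
import hashlib
import json

def make_block(prev_hash: str, data: dict) -> dict:
    """Build a block whose hash covers its transaction data AND the
    previous block's hash, chaining the blocks together."""
    payload = json.dumps({"prev_hash": prev_hash, "data": data}, sort_keys=True)
    block_hash = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return {"prev_hash": prev_hash, "data": data, "hash": block_hash}

# A tiny three-block ledger; the first (genesis) block has no real predecessor.
genesis = make_block("0" * 64, {"date": "2019-07-01", "amount": 1.2, "parties": ["A", "B"]})
block2 = make_block(genesis["hash"], {"date": "2019-07-02", "amount": 0.4, "parties": ["B", "C"]})
block3 = make_block(block2["hash"], {"date": "2019-07-03", "amount": 2.0, "parties": ["C", "A"]})
ledger = [genesis, block2, block3]
```

Because block2 stores genesis's hash and block3 stores block2's, changing anything in an earlier block silently invalidates every block after it, which is the hook the immutability discussion below hangs on.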
Blockchain transactions carry no transaction costs—transactions are free! There may be infrastructure costs to support the hardware to participate in a blockchain network, though. Because of the lack of transaction costs, some experts speculate that blockchain could eventually replace businesses that rely upon transaction fees (such as Visa or MasterCard, or even Uber and Airbnb) to generate revenue.

Let’s go back to the three principles of blockchain and see how they are relevant given the five steps we just described. Decentralization is apparent, because each node contains a copy of the ledger. And anyone can participate in the blockchain. Transparency is handled because, again, every node has access to the transaction data and the entire ledger.

Immutability is a little trickier. The first part of immutability is that as a block gets added to the ledger, it contains a hash of the preceding block. It’s worth expanding on this “a block contains a hash of the preceding block” concept for a second. Let’s say that block 105 is complete. Block 106 gets created, and part of the data within block 106 is a hash of the encrypted block 105. In a way, block 106 contains part of block 105—at least a hash of what block 105 is supposed to be. If even a single character changes within block 105, its entire hash is different. So if someone tries to alter the data within block 105 on a node, the data in block 106 on that node also becomes corrupt. Everyone in the blockchain would know that the data was altered.

That brings up the question, how do you know which data is correct? Remember that thanks to decentralization and transparency, every node has a copy of the ledger. This is the second pillar of immutability. If there’s an issue, the system self-checks. The copy of block 105 that’s contained on the majority of nodes is presumed to be the correct one, and that ledger gets sent back to the altered node, correcting its ledger.
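That self-checking behavior can be sketched too. In the same hypothetical toy-model spirit (none of this is real Bitcoin code), a node can detect a broken chain by recomputing every hash, and the network can restore a tampered copy by majority vote:

```python
import hashlib
import json
from collections import Counter

def block_hash(prev_hash: str, data: dict) -> str:
    """Hash a block's data together with the previous block's hash."""
    payload = json.dumps({"prev_hash": prev_hash, "data": data}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def chain_is_valid(chain: list) -> bool:
    """Recompute every hash and check each block's link to its predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["prev_hash"], block["data"]):
            return False  # block contents no longer match their stored hash
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # link to the previous block is broken
    return True

def majority_ledger(node_copies: list) -> list:
    """Return the ledger copy held by the majority of nodes.
    (Serialize each copy so identical ledgers can be counted.)"""
    serialized = [json.dumps(copy, sort_keys=True) for copy in node_copies]
    winner, _count = Counter(serialized).most_common(1)[0]
    return json.loads(winner)
```

If one node alters a block, chain_is_valid() returns False for that node's copy, and majority_ledger() across all the nodes hands back the untampered version, mirroring the "majority of nodes is presumed correct" rule described above.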
For someone to truly hack a blockchain, they would need to be in possession of 51 percent or more of all nodes. For perspective, even the largest Bitcoin mining farms have only about 3 percent of Bitcoin nodes. So, you can consider the data to be essentially immutable. With the need for a large number of peer-to-peer nodes, you can see how cloud computing helps make blockchain more feasible and manageable. All major CSPs offer blockchain services.

Grab a Pick and a Shovel
In the Bitcoin ecosystem, solving the mathematical problem to validate a new block is rewarded with a payout of a small amount of Bitcoin. In the mid-to-late 2010s, some people felt they could make money by setting up their computer to be a Bitcoin node—this is called Bitcoin mining. Originally, people used their home PCs. Then, some more hardcore miners found that the CPUs in most computers were too inefficient but that the graphics processing units (GPUs) in high-end video cards were better suited for mining. Once the news got out, the price of video cards skyrocketed, and it became nearly impossible to find them in stores.

Did Bitcoin miners strike it rich? Not really. Validating transactions takes a fair amount of computing power and time. After investing in mining hardware and paying for electricity to run the hardware, most miners didn’t come out ahead. Today, even with specialized mining hardware, casual miners might make a few cents per day—in most cases not even enough to pay for the electricity it takes to mine in the first place.

If you’re interested in learning more about becoming a Bitcoin node or a Bitcoin miner, visit https://bitcoin.org/en/full-node. For an interesting video of a large Chinese Bitcoin mine, take a look at https://www.youtube.com/watch?v=K8kua5B5K3I. In the meantime, it’s probably not a good idea to quit your day job.

Subscription Services

Cloud service providers offer several different ways to pay for their resources.
The two most popular models are pay-as-you-go and subscription-based. In the pay-as-you-go model, you pay only for the resources that you use. For companies that use few resources or have wildly unpredictable needs, this can be the cheaper way to go. Subscription-based pricing, or subscription services, usually offers a discount over pay-as-you-go models, but it also means your company is locked into a contract for the length of the subscription. Generally speaking, the longer the contract you agree to, the bigger the discount you will receive. Your company either pays for subscription services up front or pays a monthly fee. Reserved instances, which we talk about more in Chapter 5, are subscription-based services. Overestimating the amount of cloud resources needed and getting locked into a subscription can result in a company overpaying for cloud services.

Subscriptions are available for the standard cloud resources, such as compute, storage, network, and database. It’s also becoming more and more common for software to operate on a subscription basis. Take Microsoft Office 365, for example. Several years ago, if a company wanted Microsoft Office, it would pay a one-time per-seat license fee, get some installation CDs, and install the software. Microsoft would try to hard-sell the company on upgrading when the next version came out—usually about every three years—but the company could just ignore the request. Eventually, Microsoft went to cutting off updates and technical support for older versions in an effort to get people to upgrade, but that still didn’t convince everyone. Now, instead of buying Office, you essentially rent it. You have no choice if you want to use it. And instead of a one-time fee, you pay a yearly subscription fee. Microsoft isn’t the only company doing this, so we don’t mean to sound like we’re picking on it.
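The pay-as-you-go versus subscription trade-off described above boils down to simple break-even arithmetic. Here is a small sketch with entirely made-up prices (a hypothetical $0.10/hour on-demand rate against a hypothetical $50 flat monthly subscription), just to show the shape of the comparison:

```python
ON_DEMAND_RATE = 0.10        # hypothetical $/hour under pay-as-you-go
SUBSCRIPTION_MONTHLY = 50.0  # hypothetical flat monthly subscription fee

def monthly_cost_pay_as_you_go(hours_used: float) -> float:
    """Pay-as-you-go bill: you pay only for the hours you actually use."""
    return ON_DEMAND_RATE * hours_used

def cheaper_option(hours_used: float) -> str:
    """Compare the two pricing models for a given monthly usage level."""
    payg = monthly_cost_pay_as_you_go(hours_used)
    return "pay-as-you-go" if payg < SUBSCRIPTION_MONTHLY else "subscription"

# Light or unpredictable usage favors pay-as-you-go...
print(cheaper_option(300))   # 300 h x $0.10 = $30, under the $50 subscription
# ...while steady, heavy usage favors the subscription discount.
print(cheaper_option(720))   # 720 h x $0.10 = $72, over the $50 subscription
```

Note the overpayment risk the text warns about: overestimate your needs, lock in the subscription, and you pay $50 every month for $30 worth of actual consumption.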
Software companies will tout the benefits, such as always having the newest version, being able to cheaply test new options, and automatically getting updates over the Internet. Really, though, it’s a fundamental change in their business model designed to increase revenue. It’s neither good nor bad—your perspective will depend on your company’s needs—it’s just the way it is now and for the foreseeable future.

Collaboration

The nature of business today is that companies need to be quick and agile. This applies to everyone, from small, five-person tax preparation firms to multibillion-dollar global giants. A big enabler of agility, especially when team members are geographically separated (or work from home), is collaboration software. The cloud makes collaboration so easy that it’s almost second nature to some. Collaboration services will vary based on the package chosen, but in general, these are some of the things to expect:

Shared file storage
Video conferencing
Online chat
Task management
Real-time document collaboration

Using collaboration software, it’s completely normal to have five people from different parts of the world video conference together, reviewing and editing a presentation for senior management or a client. The collaboration software allows access regardless of device platform or operating system, because it’s all cloud-hosted on the Internet.

Security on collaboration platforms is pretty good. Users will be required to create a user account and password and must be invited to the specific team’s collaboration space. Some collaboration packages support SSO or federation to make managing usernames and passwords a little easier.

There are dozens of collaboration software packages in the marketplace. They will have slightly different features as well as costs. Some are free for basic use but may have limited features or charge a small fee for storage. Examples include Microsoft Teams, Slack, Confluence, Podio, Quip, Samepage, and Bitrix24.
Figure 3.12 shows a screenshot of the Microsoft Teams file storage area we have set up to collaborate on this book.

Figure 3.12 Microsoft Teams

VDI

Virtual desktop infrastructure (VDI) has been in use since about 2006, so it predates the cloud. It’s another example of a service that the cloud has made easier and cheaper to manage. In a VDI setup, the administrator creates a user desktop inside a VM located on a server. When users log into the VDI, they are presented with their OS’s graphical user interface (GUI) just as if they were logged into a local machine.

Administrators can set up the virtual desktops in a few different ways. One is to let users customize their desktops just as they would on a personal device. The other is to lock the desktops so they can’t be modified; the desktop will look the same every time someone logs in. The latter scenario can be useful if, for example, salespeople need to log into a kiosk within a store to complete a sale. Admins can also set up desktop pools, which are groups of similar desktops. For example, there could be one pool for the finance department with the software they need and a separate one for human resources with their software packages.

Technically speaking, a VDI is specifically run by a company within its internal data center—it’s not cloud-based. CSPs may offer VDI under the name desktop as a service (DaaS). It’s the same concept as VDI—the only difference is where the VMs are hosted. As Citrix executive Kenneth Oestreich quipped, DaaS is “VDI that’s someone else’s problem.”

The terms VDI and DaaS often get used interchangeably, even though they are different things. We’ll continue to refer to the technology as VDI for ease and to match the CompTIA Cloud Essentials+ objectives. For purposes of the CompTIA Cloud Essentials+ exam, assume that VDI and DaaS are one and the same—unless you get a question on where one or the other is hosted.
VDI Benefits
From an end-user standpoint, the single biggest benefit of VDI is ease of access. The user can be anywhere in the world, and as long as they have an Internet connection, they can log into their desktop. Users also have the benefit of accessing their virtual desktop from any type of device, such as a desktop, laptop, tablet, or smartphone.

Companies benefit from using VDI because of centralized management, increased security, and lower costs. All desktops are managed centrally on a server. Any changes that administrators need to make, such as adding or updating software, can be handled quickly and easily. Security is increased because all user files are stored on the server as opposed to on a laptop or PC. If a device were to get stolen, there is less risk of losing the data. In theory, VDI means that companies need to buy less hardware for their users, thereby reducing the total cost of ownership (TCO).

How VDI Works
To understand how VDI works, you need to dig up some of the technical concepts you learned in Chapter 1. VDI relies upon VMs, which need a hypervisor. The virtual desktops are hosted on the VMs. VDI uses another piece of software called a connection broker that manages security of and access to the virtual desktops. Figure 3.13 illustrates a simple VDI system.

Figure 3.13 A simple VDI system

Self-Service

Finally, the last benefit of cloud services that you need to remember is self-service. We introduced self-service (or on-demand self-service) in Chapter 1 when we covered cloud characteristics. If you’ll recall, it simply means that users can access additional resources or services automatically, 24 hours a day, 7 days a week, 365 days a year, without requiring intervention from the CSP. Figure 3.14 shows the AWS S3 (storage) management screen.
If another storage bucket is needed, all we have to do is click the Create Bucket button, set up characteristics such as security and encryption, and create it. The whole process can be done in about 30 seconds, and supplier intervention isn’t needed.

Figure 3.14 AWS S3 buckets management screen

There are 11 cloud services that you need to be able to identify for the CompTIA Cloud Essentials+ exam, and some of them have multiple components. Be familiar with identity access management (including single sign-on, multifactor authentication, and federation), cloud-native applications (microservices and containerization), data analytics (machine learning, artificial intelligence, and big data), digital marketing (email campaigns and social media), autonomous environments, IoT, blockchain, subscription services, collaboration, VDI, and self-service.
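The same self-service bucket creation can also be scripted rather than clicked. The sketch below is illustrative only: the bucket name is made up, and the actual boto3 call is left commented out because it requires real AWS credentials and a globally unique name. The runnable part just pre-checks a proposed name against a few of the published S3 naming rules:

```python
import re

def valid_bucket_name(name: str) -> bool:
    """Check a proposed S3 bucket name against a few of AWS's naming rules:
    3-63 characters, lowercase letters/digits/hyphens only, and it must
    start and end with a letter or digit. (Not an exhaustive check.)"""
    return bool(re.fullmatch(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]", name))

bucket = "my-example-reports-bucket"  # hypothetical name; must be globally unique
assert valid_bucket_name(bucket)

# With AWS credentials configured, the self-service call itself is one line:
# import boto3
# boto3.client("s3").create_bucket(Bucket=bucket)
```

No ticket, no phone call, no CSP intervention: the request is validated and provisioned automatically, which is exactly the on-demand self-service characteristic described above.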