Chapter 1: Introduction to Data Centers
Summary
This document provides an introduction to data centers, including their definition, different types, components, operating principles, and the challenges faced by data center managers. It covers topics such as data center strategic planning and operational issues. The document is useful for those beginning to learn about data centers.
CHAPTER-1: INTRODUCTION TO DATA CENTERS

1.1 What is a Data Center?

Data center definition: The term "data center" means different things to different people. The U.S. Environmental Protection Agency defines a data center as: "Primarily electronic equipment used for data processing (servers), data storage (storage equipment), and communications (network equipment). Collectively, this equipment processes, stores, and transmits digital information."

[Figure: Data center strategic planning forces]

Example: Data centers are involved in every aspect of life, running Amazon, AT&T, CIA, Citibank, Disneyworld, eBay, FAA, Facebook, FEMA, FBI, Harvard University, IBM, Mayo Clinic, NASA, NASDAQ, State Farm, U.S. Government, Twitter.

1.2 Types of Data Centers

It is more cost-efficient and energy-efficient to build out these facilities in an incremental fashion. As a result, many providers have developed what they refer to as "modular" data centers. At present, there are five categories of data centers that are generally considered "modular" within the marketplace:
1. Traditional Design
2. Monolithic Modular (Data Halls)
3. Containerized
4. Monolithic Modular (Prefabricated)
5. Scaling Data Centers

What is a modular data center? Definition: A critical point in the definition of a Modular Data Center (MDC) is that it can have no compromises with respect to a standard data center.

1.3 Essential Data Center Components

Modern data centers rely on powerful technologies to manage an organization's data. Key data center components include:
- Servers
- Racks
- Network connectivity infrastructure
- Security measures and appliances
- Monitoring structures
- Storage infrastructure
- Cooling and airflow systems (as well as fire protection)
- Policies to maintain efficiency, security and performance

Data centers typically encompass a full building, part of a building, or, for large corporations, multiple buildings where computer systems and data reside alongside other necessary equipment. Different tiers and standards exist, from a basic server room to a truly robust environment with fully redundant systems that can operate uninterrupted for an indefinite time despite power outages.

1.4 Data Center Principles

There are three principles of data center design. When you understand them, you are able to:
- Lower your total cost of ownership
- Support your future growth plans
- Reduce your risk of downtime
- Maximize performance
- Improve your ability to reconfigure

1.4.1 Principle 1: Space Savings
Data center racks and equipment can take up an enormous amount of real estate, and the future demand for more network connections, bandwidth and storage may require even more space. With insufficient floor space a topmost concern among IT managers today, maximizing space resources is the most critical aspect of data center design.

1.4.2 Principle 2: Reliability
Uninterrupted service and continuous access are critical to the daily operation and productivity of your business. With downtime translating directly into lost income, data centers must be designed for redundant, fail-safe reliability and availability. Depending on the business, downtime can cost anywhere from $50K to over $6 million per hour.

1.4.3 Principle 3: Manageability
Manageability is key to optimizing your data center. The infrastructure should be designed as a highly reliable and flexible utility to accommodate disaster recovery, upgrades and modifications.
Manageability starts with strategic, unified cable management that keeps cabling and connections properly stored and organized, easy to locate and access, and simple to reconfigure.

1.5 Data Center Operational Issues

Listed below are the top 5 challenges faced by data center managers while implementing and managing data centers.

Challenge #1: Real-time Monitoring and Reporting
Data centers have a lot going on inside them, so unexpected failures are inevitable. There are applications, connecting cables, network connectivity, cooling systems, power distribution, storage units, and much more running all at once. Constant monitoring and reporting of different metrics is a must for data center operators and managers. A DCIM (data center infrastructure management) system provides deeper insight into data center operations and performance metrics. It helps you track, analyze, and generate reports in real time, so you can make well-informed decisions and take immediate action.

Challenge #2: Capacity Planning and Management
More often than not, data center managers tend to over-provision so as to avoid downtime. This results in wasted resources and space, as well as wasted power and energy. As data volumes grow, capacity planning has long been an open question for data center managers, at least until DCIM solutions emerged. Using a DCIM system, data center managers can identify idle physical space, capacity, power, cooling, and other resources in a data center. This makes it easy to optimize capacity while cutting costs, saving energy, and avoiding downtime at the same time.

Challenge #3: Uptime and Performance Maintenance
Measuring performance and ensuring uptime of data centers is a major concern for data center managers and operators. This also includes maintaining power and cooling accuracy and ensuring the energy efficiency of the overall structure. Manually calculating the metrics is of little or no help in most cases. A powerful tool like a DCIM system helps you, as a data center manager, measure essential metrics such as Power Usage Effectiveness (PUE) in real time, making it easy to optimize and manage uptime and other aspects of performance.

Challenge #4: Energy Efficiency and Cost Cutting
The data center (DC) sector is estimated to account for 1.4% of global electricity consumption (Source: MDPI). The data center industry is often in trouble because of its massive energy consumption and rising-temperature problem. Often, more energy is wasted than usefully consumed at a data center site. This happens when there is a lack of proper energy monitoring tools and environmental sensors. With a DCIM system, you can monitor energy consumption efficiently, optimize it, and cut costs where possible. This will help you reduce your data center costs and be energy efficient and environmentally friendly, all at once.

Challenge #5: Staff Productivity Management
Tracking, analyzing, and reporting the performance of all the data center infrastructure is a daunting task. It may take a lot of your staff's time and effort, yet you cannot ensure accuracy when monitoring is carried out manually. A DCIM solution will help you automate these operations and free up a lot of your staff's time. As technology, applications, data, and devices develop, data centers must continuously evolve with them.
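The DCIM capabilities described in Challenges #1 through #3 largely come down to computing a few metrics from live readings. The following is a minimal Python sketch of that idea; the readings and the alert threshold are purely hypothetical, and no particular DCIM product's interface is implied.

```python
# Hypothetical power readings (kW) that a DCIM agent might collect.
facility_kw = 1800.0      # utility feed: IT + cooling + power losses + lighting
it_kw = 1200.0            # power delivered to the IT equipment
provisioned_it_kw = 2000.0

pue = facility_kw / it_kw                    # Power Usage Effectiveness (Challenge #3)
idle_capacity_kw = provisioned_it_kw - it_kw # over-provisioned headroom (Challenge #2)

print(f"PUE = {pue:.2f}")                    # 1.50 with these example readings
if pue > 1.6:                                # example alert threshold only
    print("PUE above target: investigate cooling and power-distribution losses")
print(f"Provisioned but unused IT capacity: {idle_capacity_kw:.0f} kW")
```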
1.6 Operations Management and Disaster Management

Some of the best practices in operations management include applying:
- ISO standards
- air management
- cable management
- preventive and predictive maintenance
- 5S (Sort, Set in order, Shine, Standardize and Sustain)
- disaster management
- training

[Figure: Layout for a data center]

1.6.1 ISO Standards
To better manage your data centers, operations management adheres to international standards, so as to "practice what you preach." Applicable ISO standards include the following:
- ISO 9000: Quality management
- ISO 14000: Environmental management
- OHSAS 18001: Occupational Health and Safety Management Standards
- ISO 26000: Social responsibility
- ISO 27001: Information security management
- ISO 50001: Energy management
- ISO 20121: Sustainable events

1.6.2 Computerized Maintenance Management Systems
Redundancy alone will not prevent failure and preserve reliability. A computerized maintenance management system (CMMS) is a proven tool, enhanced with mobile, QR/barcoding, or voice recognition capabilities, mainly used for managing and maintaining data center facility equipment, scheduling maintenance work orders, controlling inventory, and purchasing service parts. Generally, a CMMS encompasses the following modules:
- Asset management (Mechanical, Electrical, and Plumbing equipment)
- Equipment life cycle and cost management
- Spare parts inventory management
- Work order scheduling (man, machine, materials, method, and tools):
  - Preventive Maintenance (e.g., based on historical data and meter readings)
  - Predictive Maintenance (based on noise, vibration, temperature, particle count, pressure, and airflow)
  - Unplanned or emergency services
- Depository for Operations and Maintenance manuals and maintenance/repair history

1.6.3 Cable Management
The cabling system may seem to be of little importance, but it has a big impact and is long lasting, costly, and difficult to replace. It should be planned, structured, and installed per the network topology and cable distribution requirements specified in the TIA-942-A and ANSI/TIA/EIA-568 standards. The cable should be organized so that connections are traceable for code compliance and other regulatory requirements. Poor cable management can create electromagnetic interference due to induction between data cables and equipment electrical cables. To improve maintenance and serviceability, cabling should be placed so that it can be disconnected to reach a piece of equipment for adjustments or changes. Ensure cable management "discipline" to keep cabling from getting out of control and leading to chaos in the data center.

1.6.4 The 5S Pillars
5S is a lean method that organizations implement to optimize productivity by maintaining an orderly workplace. 5S is a cyclical methodology including the following:
- Sort: eliminate unnecessary items from the workplace.
- Set in order: arrange the workplace so that items are easy to find and put away.
- Shine: thoroughly clean the work area.
- Standardize: create a consistent approach with which tasks and procedures are done.
- Sustain: make a habit of maintaining the procedures.
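Section 1.6.2 distinguishes preventive maintenance (driven by historical data and meter readings) from predictive maintenance (driven by condition sensors). The toy Python sketch below illustrates how such triggers might generate work orders; the asset names, fields, and thresholds are illustrative assumptions and do not represent any specific CMMS product.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    run_hours: float        # meter reading used for preventive maintenance
    vibration_mm_s: float   # condition reading used for predictive maintenance

def work_orders(assets, pm_interval_h=2000.0, vibration_limit=7.1):
    """Return (asset, work-order type) pairs based on simple trigger rules."""
    orders = []
    for a in assets:
        if a.run_hours >= pm_interval_h:
            orders.append((a.name, "preventive"))   # scheduled by meter reading
        if a.vibration_mm_s > vibration_limit:
            orders.append((a.name, "predictive"))   # triggered by condition data
    return orders

fleet = [Asset("CRAC-01", 2150, 3.2), Asset("Chiller-02", 900, 8.4)]
print(work_orders(fleet))
# [('CRAC-01', 'preventive'), ('Chiller-02', 'predictive')]
```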
1.7 Training and Certification

Planning and training play a vital role in energy-efficient design and the effective operation of data centers. The U.S. Department of Energy offers many useful training programs and tools. The Data Center Energy Practitioner (DCEP) Program offers data center practitioners different certification programs.

1.8 Key Lessons Learned from Natural Disasters

Key lessons learned from natural disasters are highlighted as follows:
- Have a detailed crisis management procedure and communication command line.
- Conduct regular drills with the emergency response team using established procedures.
- Regularly maintain and test-run standby generators and critical infrastructure in the data center.
- Have contracts with multiple diesel suppliers to ensure fuel deliveries.
- Fly in staff from non-affected offices.
- Stock up on food, drinking water, sleeping bags, etc.
- Have different communication mechanisms such as social networking, web, and satellite phones.
- Keep required equipment on-site and readily accessible (e.g., flashlight, portable generator, fuel and containers, hoses, and extension cords).
- Brace for the worst: preplan with your customers on communication during a disaster, a controlled shutdown, and a DR plan.

Review Questions
1. Define data center.
2. Draw the diagram for data center strategic planning forces.
3. List the types of data centers.
4. What is a modular data center?
5. Explain the essential data center components.
6. Explain the data center principles.
7. List and explain the 5 data center operational issues.
8. What are the best practices in operations management? Explain them in detail.
9. What are the key lessons learned from natural disasters?

DATA CENTER DESIGN AND ADMINISTRATION
CHAPTER-2: MODULAR DATA CENTERS, ENERGY AND SUSTAINABILITY IN DATA CENTERS

2.1 Introduction

Modular data center definition: Modular data centers are a type of data center infrastructure composed of prefabricated modules or units. These modules can be easily transported, assembled, and configured on-site, providing a more flexible and scalable solution compared to traditional data centers. An MDC must be equivalent to a standard data center with respect to usability, reliability, and life expectancy, and to its primary purpose: to house, secure, power, cool, and provide connectivity for any type of IT device that needs to be deployed.

2.2 MDC Benefits and Applications

Using pre-assembled modules to build out a data center can save a company's assets of people, time, power, and money. These benefits most often occur when the MDC is aligned to the size, timetable, scalability, flexibility, cost effectiveness, and reliability level of the IT it is built to support. This can be true whether the MDC is deployed as single non-redundant modules of 50 or 200 kW, as 2N powered and cooled modules of 300 kW to 1.5 MW, or even when aggregating these modules into 20 MW+ enterprise-class data centers.

MDC components are pre-engineered and preassembled at their source, so the client does not manage the schedule for all the subsidiary elements in those modules. Scheduling concerns based on the availability of raw steel, copper, engine blocks, compressors, electrical components, and the like, including skilled labour for each, are removed from the client's responsibility. Not having to preplan around these interdependent schedules can allow time savings of more than a year, including the ability to push out the start of the construction process to more closely align with future business needs.

Benefits: One of the largest benefits of the MDC comes from the approach of co-delivering IT and facilities. Traditionally, data centers had been described by "total square area of raised floor space" or defined in capacity by "power per area" metrics. This is shifting to "power per rack" and total rack availability as the best way to define IT needs.
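Section 2.2 mentions building blocks in the 50–200 kW range and 2N modules of roughly 300 kW to 1.5 MW that can be aggregated into 20 MW+ facilities. A back-of-the-envelope sketch of sizing such a build-out follows; the target load and module size are hypothetical, chosen only to show the arithmetic.

```python
import math

target_it_kw = 4500          # hypothetical forecast IT load
module_it_kw = 300           # hypothetical capacity of one 2N power-and-cooling module

modules_needed = math.ceil(target_it_kw / module_it_kw)
print(f"{modules_needed} modules of {module_it_kw} kW "
      f"cover {modules_needed * module_it_kw} kW of IT load")
# 15 modules of 300 kW cover 4500 kW of IT load
```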
2.3 Modularity, Scalability, Planning

Starting a construction project is a major undertaking for a business at any scale, due to the number of people involved, the time from planning to operation, the external regulatory and non-governmental influences, and the sheer size of the capital spent. Figure 4.3 compares the cost of a large on-site construction project, often called a monolithic data center approach (solid lighter line), to a modular approach (solid darker line).

2.4 Site Preparation, Commissioning

Data center design professionals made sure there was plenty of room to locate all of the electrical conduit under the MDC site. These electrical duct banks, whether underground or overhead, follow all the same design standards as if they were going to be used with a site-built data center of the same scope.

For an MDC design, the layout and site prep must also allow for the clearance required to get larger prebuilt modules into place. A traditional DC project needs a detailed timeline to complete the serial sequence of its tasks. An MDC project is much the same from a planning perspective; it is just that many of the parts can be done in parallel at multiple sites and come together at the final site. The challenge then is to optimize the schedule and make all of these elements come together at the same time, just when needed.

A large site-built data center project can take 18 months due to the serial nature of the process and how long each of the individual elements takes because of all the on-site labor.

Commissioning: Commissioning site-built data centers is such a normal practice that it rarely leads to questions about how, when, or who does it. MDCs offer a unique opportunity in the commissioning activity that could further reduce costs and shorten lead time if properly accounted for. When it comes to commissioning work, the first question to consider is where to commission and at what load for IT, mechanical, and electrical systems. Ideally, the IT vendor would load common software benchmark tools like LINPACK onto the real preconfigured IT and test at the factory and again at the site. But it is possible that the client's software prevents that, which means site-level commissioning may be limited to testing at an IT kW stress level below the design point.

2.5 How to Select an MDC Vendor

The key to selecting the right partner to build an MDC is to start with the definition of what you want your IT and data center to do over a 12–36 month period.
- If forecasting is not available, select a vendor that can help with forecasting.
- Look for proven experience with positive client references.
- Look for a stable company that will be around at least as long as your warranty period.
- Determine whether you have a comfort factor and experience with running a construction project. If not, look for a vendor that can provide a turnkey solution or the best solution that meets your needs.
- Conduct a request for information (RFI) based on these criteria to narrow down the field if you have a lot of uncertainty about what type of data center you need.
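Section 2.3 contrasts the up-front cost of a monolithic build with the incremental spend of a modular approach (Figure 4.3, not reproduced here). A rough illustration of that comparison, with entirely hypothetical cost figures:

```python
# Hypothetical capital outlays (in $M) over four years of demand growth.
monolithic = [60, 0, 0, 0]       # full facility built up front
modular    = [18, 14, 14, 14]    # capacity added as modules when needed

def cumulative(costs):
    total, out = 0, []
    for c in costs:
        total += c
        out.append(total)
    return out

print("monolithic cumulative:", cumulative(monolithic))  # [60, 60, 60, 60]
print("modular cumulative:   ", cumulative(modular))     # [18, 32, 46, 60]
```

The point of the comparison is simply that the modular curve defers most of the spend until capacity is actually needed, which is what the solid darker line in the referenced figure conveys.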
2.6 External Factors

The funding of a new data center or data center capacity expansion can have significant budgetary and tax implications based on how the technology is viewed both by the tax code and by corporate finance policies. The nature of MDC solutions can blur the lines between IT equipment and facility infrastructure in many areas. As a result, this construction approach can lend additional flexibility to organizations to utilize the funds that are available to accomplish the organization's goals.

2.7 Modularity in Data Centers

As a new data center goes live, there is typically a period of time during which the IT equipment is ramping up to its full potential. This will most often come in the form of empty raised floor space that is waiting for build-out with servers, networking, and storage gear. Even after the IT gear is installed, there will be a period of time before utilization increases, drives up power consumption, and intensifies heat dissipation of the IT gear well beyond minimum ratings. In some data centers this might take a few months; in others, even longer.

2.8 Types of Flexible Data Centers

So what does a flexible data center look like? The needs of the end user will drive the specific type of design approach, but all approaches will have similar characteristics that help in achieving the optimization goals of the user.
- Containerized data centers
- Industrialized data centers
- Cloud-based data centers
- Hyperscale data centers
- Edge data centers
- Micro data centers

2.8.1 Container Data Centers
This is typically what one might think of when discussing modular data centers. Containerized data centers were first introduced using standard 20- and 40-ft shipping containers. Newer designs now use custom-built containers with insulated walls and other features better suited for housing computing equipment. Since the containers need central power and cooling systems, they will typically be grouped and fed from a central source. Expansion is accomplished by installing additional containers along with the required additional sources of power and cooling.

2.8.2 Industrialized Data Center
This type of data center is a hybrid of a traditional brick-and-mortar data center and the containerized data center. The data center is built in increments like the container, but the process allows for a greater degree of customization of power and cooling system choices and building layout. The modules are connected to a central spine containing "people spaces," while the power and cooling equipment is located adjacent to the data center modules. Expansion is accomplished by placing additional modules like building blocks, including the required power and cooling sources.

2.8.3 Traditional Data Center
Modular planning and design philosophies can also be applied to traditional brick-and-mortar facilities. However, to achieve effective modularity, tactics are required that diverge from the traditional design procedures of the past three decades. The entire shell of the building must accommodate space for future data center growth. The infrastructure area needs to be carefully planned to ensure sufficient space for future installation of power and cooling equipment. Also, the central plant must continue to operate and support the IT loads during expansion. If it is not desirable to expand within the confines of a live data center, another method is to leave space on the site for future expansion of a new data center module.

2.9 Water Use

Water use in a data center is a very important and typically understated environmental challenge.
The greatest amount of water consumed in a data center is not the potable water used for drinking, irrigation, cleaning, or toilet flushing; it is the cooling system, namely evaporative cooling towers and other evaporative equipment and, to a lesser extent, humidification.

In addition to the water consumption that occurs at the data center (site water use), a substantial amount of water is used at the electricity generation facility in the thermoelectric process of making power (source water use). Many variables contribute to the ultimate water consumption. The local availability of water, and limitations on the amount of off-site water treatment available (the water has to go somewhere after it is used), will also play into the decision and may require a less energy-efficient system in order to avoid using water on-site.

2.10 Avoiding Common Planning Errors

When constructing a new or retrofitting an existing data center facility, there is a window of opportunity at the beginning of the project to make decisions that can impact long-term energy use, either positively or negatively. Since the goal is to achieve a positive outcome, there are some very effective analysis techniques available to gain an understanding of the best optimization strategies, ensuring you leave a legacy of energy efficiency.

Energy is not the only criterion that will influence the final design scheme, and other conditions will affect energy usage in the data center: location, reliability level, system topology, and equipment type, among others.

2.11 Cooling System Concepts

In a data center, HVAC (heating, ventilation and air conditioning) system energy consumption depends on three main factors: outdoor conditions (temperature and humidity), the use of economization strategies, and the primary type of cooling. In simple terms, the HVAC equipment takes the heat from the data center and transfers it outdoors. The higher the outdoor air temperature (and, for water-cooled systems, the higher the humidity level), the more work is required of the compressors to lower the air temperature back down to the required levels in the data center.

Economization for HVAC systems is a process in which the outdoor conditions allow for reduced compressor power (or even complete shutdown of the compressors). Different HVAC system types have different levels of energy consumption, and the different types of systems will perform differently in different climates. As an example, in hot and dry climates, water-cooled equipment generally consumes less energy than air-cooled systems. Conversely, in cooler climates with higher moisture levels, air-cooled equipment will use less energy. The maintenance and operation of the systems will also impact energy use (possibly the greatest impact). Related to the cooling system type, the supply air temperature and allowable humidity levels in the data center will influence annual energy consumption.
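Section 2.11 notes that economization reduces or eliminates compressor work when outdoor conditions allow. Below is a simplified, dry-bulb-only decision sketch in Python; the setpoints are illustrative assumptions, and real controls would also weigh humidity, as the text notes for water-cooled systems.

```python
def cooling_mode(outdoor_db_c, supply_setpoint_c=24.0, approach_c=3.0):
    """Pick an HVAC operating mode from the outdoor dry-bulb temperature (assumed thresholds)."""
    if outdoor_db_c <= supply_setpoint_c - approach_c:
        return "full economizer (compressors off)"
    elif outdoor_db_c <= supply_setpoint_c:
        return "partial economizer (reduced compressor power)"
    else:
        return "mechanical cooling (compressors on)"

for t in (12.0, 22.5, 32.0):
    print(f"{t:>5.1f} C outdoors -> {cooling_mode(t)}")
```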
2.12 Air Management and Containment Strategy

Proper airflow management creates cascading efficiency through many elements in the data center. If done correctly, it will significantly reduce problems related to re-entrainment of hot air into the cold aisle, which is often the culprit behind hot spots and thermal overload.

There are several approaches that give the end user options to choose from to meet the project requirements:
1. Passive chimneys mounted on IT cabinets
2. Hot-aisle containment
3. Cold-aisle containment
4. Self-contained in-row cooling
5. Water-cooled computers

Review Questions
1. What is a modular data center?
2. Explain MDC benefits and applications.
3. What is modularity, scalability and planning?
4. Write a short note on site preparation and commissioning.
5. List the various ways to select an MDC vendor.
6. What is modularity in data centers?
7. Explain the 3 types of data centers.
8. Write about water usage when constructing data centers.
9. What are the common errors to avoid while constructing data centers?
10. Write a short note on cooling system concepts.
11. Explain the air management and containment strategy.

DATA CENTER INFRASTRUCTURE CONSIDERATIONS

3.1 Racks, Cabinets and Support Infrastructure

Data centers employ a wide variety of racks, enclosures and pathway products such as cable trays and ladder racking. There are numerous variables to consider when selecting the right product. They must all in some way, individually and collectively, support four key areas:
- Climate control, namely cooling and humidity
- Power management
- Cable management
- Security and monitoring

In a corporate data center, a typical enclosure might house 12 to 24 servers with a switch and a monitor, generating a heat load in excess of 4,500 watts. It is easy to understand how cooling problems can arise in such a scenario. Even with computer room cooling and a fan at the top of the cabinet, there can be a wide disparity in temperature at the top versus the bottom of the enclosure.

There are many unique designs and innovative approaches that can help ensure neat, manageable bundling and routing of cables with mechanical protection, stability and flexibility. Data centers also vary widely in their approach to inter-cabinet or inter-rack cable distribution. Overhead ladder rack allows for more convenient access to cables, making moves, adds and changes easier. On the other hand, under-floor runway provides more security and cleaner aesthetics, though future changes and blockage of airflow may become issues.

3.2 Redundancy and Path Diversity

Redundancy and path diversity are considerations in corporate data center (CDC) and Internet data center (IDC) environments for power, cabling, Internet access and carrier services. Tolerance for downtime, measured against added equipment costs and "support area-to-raised floor" ratios, must be closely examined and matched. Based on these criteria, data center developers and operators design and implement different infrastructures.

Corporate data centers must carefully weigh the cost of downtime with respect to their revenue model. If outages prevent order entry, particularly during peak conditions or seasonal surges, business management must ensure that IT provides the availability and reliability to consistently meet those demands.

In the IDC arena, many operators must strive to meet or exceed the legendary "five nines" of the public telephone network. In a nutshell, "five nines" reliability equates to slightly more than five minutes of downtime annually for 24-hour service levels. It is a challenge in the data world to achieve that level of reliability, yet that is the customer expectation. This desired level of stability creates a cascading effect on capital investment, usable revenue-generating floor space versus support equipment space, and dollars invested per square foot.
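The "five nines" figure quoted in Section 3.2 follows directly from the availability percentage. A quick check, assuming a 24x7 (8,760-hour) service year:

```python
minutes_per_year = 365 * 24 * 60

for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    downtime_min = (1 - availability) * minutes_per_year
    print(f"{nines} nines ({availability:.5f}) -> "
          f"{downtime_min:.1f} minutes of downtime per year")
# 5 nines -> about 5.3 minutes per year, matching the figure quoted above
```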
3.3 Security in Data Centers

Data centers are the lifeblood of information. Company and customer data should be treated like money in a bank vault. Corporate and Internet data centers must take definitive measures to limit access to authorized personnel only, and ensure the use of proper fire prevention and life safety systems while minimizing the potential for equipment damage. Video surveillance (CCTV) and biometric or card access control are often sufficient in CDCs, but in IDCs, where personnel may come from many different companies (sometimes from the competition), additional security is required. Besides perimeter-type security, compartment security is recommended via locked cabinets or colocation cages, and additional provisions for cable raceway and raised floor access become necessary to achieve customer comfort levels. In addition, real-time personnel and asset tracking may be used.

3.4 Storage

Storage in both corporate and Internet data centers may migrate to the storage area network (SAN) model over time as the volume of stored data escalates and the management of content becomes more challenging. Additional or complementary connectivity concerns must be addressed in the data center design to accommodate flexibility and the most efficient and effective use of space.

The use of Fibre Channel technology and 50-micron multimode optical cabling may cause a re-evaluation of the overall distribution design. As other data-link-level transport methods (such as 10 Gigabit Ethernet) are evaluated and/or standardized for use in SANs, there may be an advantage to using the same fiber type to interconnect storage systems and servers throughout the data center.

3.5 Flexible and Adequate Connectivity

Adequate connectivity is key to bringing users online quickly and efficiently, whether in a corporate or IDC environment. Time is money, whether provisioning new data center customers, upgrading their bandwidth or services, or providing quick, coordinated and efficient moves, adds and changes service. The choice of media in the data center may be more critical than in other wired areas, just as equipment reliability and redundancy are more critical in a hospital operating room than in business offices. Performance, flexibility, headroom, patching and error-resistance are all variables in the same crucial design formula.

3.6 Choosing the Appropriate Cabling Media

Choosing the appropriate cabling media can affect many aspects of data center design. An entire project or facility must be considered with respect to systems and manufacturer connectivity requirements, not only for present-day needs but also for future requirements.

3.7 Data Center Cabling Considerations

A data center cabling system may require a balance of both copper and fiber to cost-effectively meet today's needs and support the high-bandwidth applications of the future. Rapidly evolving applications and technologies are drastically increasing the speed and volume of traffic on data center networks. Ensuring that a cabling solution is designed to accommodate the higher transmission rates associated with these evolving, bandwidth-intensive applications is critical. The structured cabling architecture commonly used for data centers requires a reliable, high-performance, high-density design with flexible guidelines for quick installation, future readiness and easy-to-use applications.
To address these criteria, the following points must be considered and prioritized:
- The sophistication of the network applications
- The kind of traffic expected on the various portions of the data center network, based on number of users, data transfer requirements of each user, LAN architecture, etc.
- The life expectancy of the network and cabling infrastructure
- The frequency of moves, adds and changes
- The growth potential of the network over its expected life
- Any adverse physical conditions in the customer's data center
- Any access restrictions, whether physical or time related

3.8 Cabling Infrastructure: Copper vs. Fiber

Cable and connectivity considerations: choices between fiber, copper, or the selective use of both depend on a variety of criteria, including:
- Bandwidth and performance required per data center area
- Immunity to Electromagnetic Interference (EMI) and Radio Frequency Interference (RFI)
- Need for density and space-saving connectivity
- Flexibility and speed of reconfiguration
- Device media interface considerations
- Standardization
- Future vision and tolerance for recabling
- Cost of electronics

3.8.1 The Case for Fiber

Fiber can provide numerous advantages over copper in a data center environment:
- Fiber systems have greater bandwidth and error-free transmission over longer distances, allowing network designers to take advantage of new data center architectures.
- The cost of fiber optic solutions is comparable with extended-performance copper cabling (Category 6A/ISO Class EA and Category 7/ISO Class F).
- Optical data center solutions are designed for simple and easy handling and installation.
- Fiber systems are easier to test.
- Optical fiber is immune to EMI/RFI.
- Faster installations (up to 75 percent faster) offer time and cost savings to data center developers, operators and contractors.
- High-density fiber optic systems maximize valuable space. Fiber's small size and weight require less space in cable trays, raised floors and equipment racks. As a result, smaller optical networking provides better under-floor cooling and gives precious real estate back to the data center.
- Fiber has the ability to support higher data rates, taking advantage of existing applications and emerging high-speed network interfaces and protocols. Multimode fiber optics support 10 Mbps/100 Mbps/1 Gbps/10 Gbps Ethernet and 1/2/4 Gbps Fibre Channel.
- Fiber provides "future proofing" in network infrastructure. Laser-optimized 50-micron/ISO OM3 fiber is generally recommended by storage area network manufacturers because of its higher bandwidth capabilities. The next proposed IEEE Ethernet speeds, 40 Gbps and 100 Gbps, will run only over fiber, apart from a short distance of 10 meters over copper.

3.8.2 Why Pre-terminated Fiber Systems?
- Up to 60 percent smaller optical networking solution, freeing up raised floor and racking space
- Get online quicker with up to 75 percent faster installation
- Modular design for faster moves, adds and changes
- Defined accountability, built-in compatibility
- Factory terminations
- Consistent results from an ISO 9001 / TL 9000 quality architecture
- Systems are factory tested before shipment, reducing risk
- Eliminates variability in material cost

3.9 Electrical Power System Considerations

An on-time, trouble-free start-up of a data center facility requires detailed preplanning of the entire electrical power system. Careful integration of the primary electrical system with UPS, battery and other backup systems is essential.
Because of the large number of interrelated factors involved, the detailed design of the entire electrical power supply system, including cables, cable accessories and all other components of the system, is usually best completed by a professional electrical design group experienced in data center electrical power supply systems.

3.10 Energy Efficiency and Environmental Considerations

- High-voltage DC power: 20–40% more efficient than AC conversion; reduces fossil fuel consumption
- Cable plant media and design: increases air flow; improves cooling efficiency
- Equipment layout: promotes thermal management and cooling efficiency
- Deployment: reduces waste at the job site; improves construction efficiency
- Monitoring / management: Environmental – improves operational efficiencies; Infrastructure – asset utilization, business resiliency, security
- Alternative power: solar, wind, fuel cells

3.11 Intelligent Infrastructure Management (IIM) within a Data Center Environment

Organizations are increasingly dependent on their IT networks to provide a competitive advantage as well as meet both customer and shareholder needs.

3.12 The Impact of Intelligent Infrastructure Management on Data Center Operations

- Reduced cost and increased effectiveness of IT operations
- Saves time and manpower required for planning and implementing change, performing audits and responding to faults
- Optimization and automation of the change control process along with the elimination of configuration errors
- Real-time location and tracking of all network assets can drastically improve the effectiveness of existing asset management tools
- Provides audit trails and reporting capabilities (e.g., for Sarbanes-Oxley)
- More effective use of network resources: ensures optimum switch port utilization, thereby avoiding unnecessary purchases
- Key personnel's core skills can be extended across the entire enterprise network, reducing the need for local familiarity
- Governance of sub-contractors and outsourcing service providers: ensures SLAs (Service Level Agreements) and billing are actively monitored and controlled

3.13 Factors You Should Consider when Building a Data Center

With so many variables to consider when building a data center, today's IT and business leaders are cutting new trails through uncharted territory. The next few years will unveil the developments, risks and rewards of the new economy as entrepreneurial innovation and the digital age take root, powered by business technology and the Internet and brought to you through 21st-century data centers around the world. Selecting the right infrastructure for your data center will be key to maintaining uptime 24/7, year round:
- Professional engineering
- Power requirements and cabling
- Adequate cooling
- Efficient allocation of space
- Proper racking, enclosures, pathways and access flooring
- Redundancy and path diversity
- Security
- Storage
- Flexible and adequate connectivity
- Copper or fiber cabling and management
- KVM switches
- Intelligent Infrastructure Management solutions
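One of the IIM benefits listed in Section 3.12 is confirming optimum switch-port utilization before buying more hardware. The toy sketch below illustrates that check using made-up inventory data; it does not represent any real IIM system's interface, and the 70% threshold is an assumption.

```python
# Hypothetical inventory: switch name -> (ports in use, total ports)
switches = {
    "access-sw-01": (44, 48),
    "access-sw-02": (19, 48),
    "agg-sw-01":    (30, 96),
}

for name, (used, total) in switches.items():
    utilization = used / total
    note = "room to patch here first" if utilization < 0.7 else "near capacity"
    print(f"{name}: {utilization:.0%} utilized ({note})")
```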
Review Questions
1. Write a short note on racks, cabinets and support infrastructure.
2. Explain redundancy and path diversity.
3. What is security in data centers?
4. Write a short note on storage in data centers.
5. What is flexible and adequate connectivity?
6. Explain the data center cabling considerations.
7. Compare copper vs. fiber cable considerations.
8. What are the advantages of fiber over copper in a data center environment?
9. Why are pre-terminated fiber systems used?
10. Write a note on energy efficiency and environmental considerations.
11. What is intelligent infrastructure management, and what are its impacts on data center operations?
12. What are the factors to consider when building a data center?

DATACENTER DESIGN & ADMINISTRATION
CHAPTER-4: ENVIRONMENTAL CONTROLS, PROJECT MANAGEMENT AND COMMISSIONING

4.1 Data Center Power Trends

In recent years, data center facilities have witnessed rapidly increasing power trends that continue to rise at an alarming rate. The combination of increased power dissipation and increased packaging density has led to substantial increases in chip and module heat flux. As a result, the heat load per square foot of server footprint in a data center has increased. Recent heat loads published by ASHRAE, as shown in Figure 18.1, indicate that for the period 2000–2004 the heat load for storage servers doubled, while over the same period the heat load for compute servers tripled.

4.2 Thermal Management of Data Centers

This rapid increase in the heat load per server footprint has resulted in an equal increase in research on how best to tackle the problem. Some of those topics are summarized as follows:
- The cooling system configuration
- The structural parameters of the placement of the computer room air conditioning (CRAC) unit
- Energy management
- Data center metrics
- Modelling of data centers
- Experimental investigations of data center systems

The increased energy consumption has equally opened up more opportunities for energy savings and efficient operation of data centers.

4.3 Performance Metrics

There are a number of metrics used by data center professionals to measure the effectiveness of system performance. The most commonly used is Power Usage Effectiveness (PUE), defined by The Green Grid as the ratio of the total energy consumption of the data center to the total energy consumed by the IT equipment. The ideal PUE is 1.0, where all the energy supplied to the data center is consumed by the IT equipment.

Another metric defined by The Green Grid is Water Usage Effectiveness (WUE), which can be an indicator of how efficiently water is being consumed by the facility. It is defined as the ratio of annual site water consumption to the annual energy consumption of the IT equipment.
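Using the definitions in Section 4.3, here is a quick worked example in Python; the annual figures are hypothetical and chosen only to show the ratios.

```python
total_facility_kwh = 12_000_000   # hypothetical annual energy into the data center
it_kwh = 8_000_000                # hypothetical annual energy consumed by IT equipment
site_water_liters = 30_000_000    # hypothetical annual site water consumption

pue = total_facility_kwh / it_kwh   # ideal value is 1.0
wue = site_water_liters / it_kwh    # liters of water per kWh of IT energy

print(f"PUE = {pue:.2f}")                     # 1.50
print(f"WUE = {wue:.2f} L per IT kWh")        # 3.75
```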
4.4 Data Center Project Management and Commissioning

The project management section will address the following:
- Project initiation
- Planning
- Execution
- Monitoring
- Closeout

4.4.1 Commissioning
Commissioning in a data center refers to a comprehensive process that ensures all systems, components, and equipment are designed, installed, and tested according to the specified requirements. The commissioning section of this chapter will address the following:
- What is commissioning?
- Why commission a building?
- Why commission a data center?
- Selecting a commissioning firm
- Project management and commissioning
- Equipment and systems to be commissioned
- Commissioning tasks
- Leadership in Energy and Environmental Design (LEED)-required commissioning
- Commissioning team members
- Data center trends

4.5 Project Management

Project management is defined as the process of planning, organizing, motivating, and controlling resources to achieve specific goals (Fig. 19.1). The primary challenge of project management is to achieve all of the commissioning effort's goals and objectives, as defined in the commissioning scope-of-work document, while staying within the constraints of time, quality, and budget. Because meeting the goals and objectives of the commissioning tasks defined in the scope-of-work document is so important, it is vital to have a strong project manager with good communication skills, project management skills, and experience managing the commissioning effort of a data center construction project.

4.5 Project Management Stages

The key to properly managing a commissioning project is to have a quality assurance/quality control plan, in the form of a project management control system, in place prior to initiation of the project. The project management control system should consist of the following stages: project initiation, planning, execution, monitoring, and closeout.

4.5.1 Project Initiation Stage
A data center project, from a commissioning point of view, is initiated when the commissioning firm and the client have agreed on the exact scope of work for the project, a contract has been fully executed, and all of the internal paperwork has been completed. It is at this point that the project manager begins the project initiation phase of the commissioning effort.

The purpose of contacting the architect or construction manager/general contractor is introduction, as well as asking questions and gathering the information required to begin the commissioning tasks. The questions to ask include:
- Status of the project
- Request a schedule
- Request access to the drawings and specifications
- Request a project directory
- Primary contacts
- Discuss commissioning specifications
- Discuss next steps

4.5.2 Planning Stage
During this phase, the commissioning plan is developed. This is a very important document, for it becomes the roadmap for the execution of the commissioning process throughout its life. The following is representative of the contents of a design-phase commissioning plan:
- Purpose of the plan
- Overview of the commissioning process
- Specific objectives
  - Commissioned equipment/systems
- Roles and responsibilities
  - Commissioning team members list
  - Commissioning process general rules
  - Commissioning responsibility breakdown
- Commissioning management
  - Information flow
  - Scheduling
  - Site visit protocol
  - Tracking deliverables
  - Deficiency reporting
- Testing strategy
  - Reports/logs
  - Safety and security
- Commissioning process
  - Commissioning timeline
  - Commissioning task overview
  - Design phase tasks
  - Bidding phase tasks
  - Construction phase tasks
  - Occupancy phase tasks

4.5.3 Execution Stage
Safety and security: the safety and security of all commissioning agents on site is of the utmost importance to the commissioning project management team. It is the responsibility of the commissioning project manager to contact the construction manager/general contractor to discuss their safety policies, safety classes/training, and procedures established for the project site.
4.5.3.1 The main points of the safety and security policy are typically the following:
- Contractor-required job hazard analysis documentation
- Wearing the required personal protective equipment
- Adhering to the site employee background check policy
- Adhering to the site employee drug testing policy
- Properly displaying the site identification badge and parking permit
- Following the construction manager/general contractor lockout/tagout policy
- Utilization of the proper signage and barricades during testing
- Safeguarding of any keys or access devices given to the commissioning agents
- Participation in all mandatory safety meetings and stand-downs
- Submitting the construction manager/general …

4.5.4 Monitoring Stage
The project manager is to create a living document that evolves over the course of the project, identifying all of the deliverables submitted, the deliverables in progress, and the deliverables pending submission. This document is typically submitted on a monthly basis to the client/client representative, the architect, the construction manager/general contractor, and others as directed by the client/client representative.

4.5.5 Closeout Stage
At the completion of the project, closeout documentation is created. Two documents are typically produced: the final commissioning report and the systems manual. The contents of the final commissioning report include the following:
- Executive summary narrative
- Outstanding issues
- Commissioning activities completed by phase
  - Design phase
  - Bidding phase
  - Construction phase
  - Acceptance phase
  - Occupancy phase
- Site visit reports
- Project update reports
- Appendix
  - Appendix to include all completed commissioning task documents

4.6 Systems Manual

The systems manual is always created when the project is attempting a level of LEED certification that requires the Energy and Atmosphere (EA) Credit 3 Enhanced Commissioning (EC) requirements. The systems manual can also be required on non-LEED projects if the owner sees the value in it and includes the issuance of a systems manual. The following is a sampling of the documents typically found in the systems manual, followed by the entity responsible for delivering each document to the commissioning agent:
- Final version of the basis of design (BOD): engineer of record
- System single-line diagrams: mechanical construction drawing one-line diagrams
- As-built sequences of operation control drawings, including original set points: controls contractor
- Operating instructions for integrated building systems: operation and maintenance (O&M) manual issued by the subcontractor
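The monitoring stage (Section 4.5.4) calls for a living document that tracks submitted, in-progress, and pending deliverables and is reissued monthly. A minimal sketch of such a log follows; the deliverable names and statuses are invented for illustration only.

```python
from collections import Counter

# Hypothetical deliverables log kept by the commissioning project manager.
deliverables = {
    "Commissioning plan (final)": "submitted",
    "Submittal review log":       "in progress",
    "FPT scripts":                "pending",
    "Site visit report #3":       "submitted",
}

def monthly_summary(items):
    """Summarize deliverable status counts for the monthly report."""
    counts = Counter(items.values())
    parts = [f"{status}: {n}" for status, n in sorted(counts.items())]
    return "Deliverable status - " + ", ".join(parts)

print(monthly_summary(deliverables))
# Deliverable status - in progress: 1, pending: 1, submitted: 2
```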
4.7 LEED-Required Commissioning Tasks

There appears to be a trend toward data centers becoming more energy conscious and sustainable, and thus LEED certification of data centers is becoming common practice. If the client elects to pursue LEED certification, there are certain minimum requirements of the commissioning agent. The minimum requirements include the following:
- The commissioning agent shall have documented commissioning authority on at least two building projects.
- The individual serving as the commissioning agent shall be independent of the project design and construction management, though they may be employees of the firms providing those services. The commissioning agent may be a qualified employee or consultant of the owner.
- The commissioning agent shall report results, findings, and recommendations directly to the owner.
- For projects smaller than 50,000 ft², the commissioning agent may include qualified persons on the design and construction team who have the required experience.

4.8 Minimum Commissioning Tasks

The following lists the commissioning tasks that are highly recommended.

Design phase:
- OPR/BOD review
- Drawing and specification review of the 75% and 100% construction drawings
- Issuance of a commissioning specification

Construction phase:
- Issuance of the final commissioning plan
- Commissioning kick-off meeting
- Submittal reviews
- Development of PFC, FPT, and IST documents
- Witnessing startup of commissioned equipment

4.9 Data Center Trends

A data center's functionality and construction, like those of many other types of buildings, are an evolutionary process. Some of the data center trends that appear to be becoming more common are as follows:
- Data centers are moving from allowing an opportunity for an orderly shutdown to being operational 100% of the time, creating extra design issues to address with backup capability, continuous power, and redundancy.
- Data centers were commonly located within an existing structure, whereas the trend now is toward standalone facilities.
- Since data centers consume a considerable amount of electricity, many are being constructed in remote locations to take advantage of low electrical rates.
- Data centers are looking to become more "green" in attempts to be more energy efficient and sustainable, resulting in innovative mechanical system design.

DATACENTER DESIGN & ADMINISTRATION
CHAPTER-5: DATA CENTER TECHNOLOGY

5.1 ANSI/TIA-942: Data Center Standards

Data center growth drives the need for standards. Over the past several years, more and more businesses have moved to the Web. Corporations continue to recentralize their computing platforms after years of decentralization. Mergers and acquisitions continue to stretch IT departments with systems integration challenges. These trends are driving a renewal in new data center construction and upgrades to existing facilities.

5.2 Goals of ANSI/TIA-942

The TIA defines the data center as "a building or part of a building dedicated to housing computers and support facilities." The purpose of the ANSI/TIA-942 standard is to provide a comprehensive understanding of data center design so that designers can plan the facility, the network and the cabling system. The goal is to provide a tool to get planners thinking about the components of data center design much earlier in the budgeting and development process. ANSI/TIA-942 achieves its goal by going beyond the telecommunications infrastructure to include electrical, architectural, mechanical and security guidelines as well.

[Figure: ANSI/TIA-942 data center topology]

5.3 Virtualization, Cloud, SDN, and SDDC in Data Centers

5.3.1.1 Virtualization in Data Centers
Virtualization is not a new concept in computing. It was largely used in the 1960s for mainframes, forgotten for 20 years, and then resurfaced in the early 2000s. The paradigm for virtualization, from the mainframe to the server, is the same.

Server virtualization is based on the principle of operating multiple virtual servers, or VMs, simultaneously on a single physical server. Companies can hence operate using virtual servers instead of physical servers, with the objective of improving utilization of pooled server capacity and consequently reducing the investment required in physical infrastructure. Today, we are able to run between 10 and 40 virtual servers on a single physical server. In the future, this is expected to increase to hundreds of virtual servers.
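Section 5.3.1.1 quotes consolidation ratios of roughly 10–40 VMs per physical server. A simple estimate of the hardware reduction that implies, using a hypothetical workload count:

```python
import math

vm_count = 800               # hypothetical number of workloads to host
for vms_per_host in (10, 40):
    hosts = math.ceil(vm_count / vms_per_host)
    print(f"{vms_per_host} VMs per host -> {hosts} physical servers "
          f"instead of {vm_count} unvirtualized servers")
# 10 VMs/host -> 80 physical servers; 40 VMs/host -> 20 physical servers
```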
5.3.1.2 Benefits of Virtualization
Data centers can also benefit from high redundancy and improved network reliability. One benefit is dynamic fault tolerance against software failures through rapid bootstrapping or rebooting. Data centers also inherit improved hardware fault tolerance through the migration of a VM to different hardware.

Another strong advantage is that security is facilitated: virtualization makes it possible to securely separate virtual operating systems. Today, virtualization has moved from advanced commercial products (such as VMware) toward a large variety of open source solutions such as Xen, KVM, etc.

5.3.1.3 Virtualization Technology Challenges
Networking challenges: the main challenge of virtualization comes from the network, which is not easily configurable and is not yet completely virtualized. The operational issue encountered is that the solutions are not generic, which does not facilitate the implementation of data centers nor allow their expansion.

5.3.1.4 The Main Issues of Data Center Networking
- Multitenant support
- Separation
- Addressing
- Topology/hierarchy
- Workload mobility
- I/O blocking between VMs on the blade server

5.3.2 Cloud as an Extension of the Data Center
Cloud can be defined as a computing infrastructure available in the network. Users have access to their information in the network. Cloud is a concept that complements the Internet by making computing fully available in the network. Users no longer need to own a computing environment; it can be made available remotely.

Cloud computing offers three different service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). We will focus the discussion on IaaS, as data centers are based on IaaS.

5.4 IaaS Benefits for the Data Center

IaaS technology provides companies greater flexibility in the use of their existing resources. IaaS facilitates the use of external resources, now called the public or private cloud: it also offers machines or virtual servers that are used only when you really need them. It is at this point that virtualization provides its full benefit, because the company uses only the resources it really needs, and thus significant savings can be realized.

5.4.1 IaaS Operations and Related Issues

The main challenging IaaS operation is workload mobility (live migration), which consists of moving running VMs between physical servers for load optimization or maintenance purposes. This can be done intra or inter data center. This is a hypervisor-level operation; offering it as a service requires the provider to expose an interface through which hypervisors can transfer VMs to other hypervisors on potentially different clouds.

[Figure: VM (virtual machine) migration over a hypervisor]

5.5 Networking in the Data Center

In today's data centers, the network remains a fundamental challenge, and solutions are required for configurable networks in data centers. This section defines data center topologies and the evolution of network technologies.

5.6 Topology of the Data Center Network

The data center is composed of a few thousand servers, each containing VMs. Building the network for the data center involves a trade-off between speed, scale, and cost.
Connecting thousands of VMs requires multiple sets of switches and routers organized in a hierarchical topology.

5.7 Data Center Networks Are Composed of Three Distinct Tiers

1. In the first tier, the server/storage devices are connected to the Top-of-Rack (ToR) switches. To reduce cabling, each blade chassis has its own ToR switch, to which all servers of that chassis are connected.
2. The second tier aggregates access switches toward the core of the network. The edge switches (ESs) are interconnected to establish a pod. Those pods are connected to the aggregation switches (ASs), which in turn are connected to the core switches/routers (CRs).
3. The third, core network tier connects the CRs and allows communications to the outside world.

5.8 Multitier View of the Data Center Network Topology

[Figure: Multitier view of the data center network topology]

5.9 Free Cooling Technologies in the Data Center

Using properties of ambient air to cool a data center: the major considerations for the design engineers when selecting the cooling system for a specific site are:
a. Cold aisle temperature and maximum temperature rise across the server rack
b. Critical nature of continuous operation for individual servers and peripheral equipment
c. Availability of sufficient water for use with evaporative cooling
d. Yearly ambient dry-bulb (db) and wet-bulb (wb) temperatures, extremes of db and wb temperatures, and air quality, that is, particulates and gases
e. Utility costs

5.10 Smart Grid-Responsive Data Centers

What are grid-responsive data centers? Demand response (DR) programs and tariffs are designed to improve grid reliability and decrease electricity use during peak demand periods, which reduces total system costs [6–8]. The term "Smart Grid-Responsive" (or Grid-Responsive for short) data centers indicates that facilities are not only "self-aware" to meet local needs but also "grid aware," responding to changing grid conditions (e.g., price or reliability) and gaining additional benefits from incentives, credits, and/or lowered electricity prices.

5.11 IT Infrastructure Virtualization Technologies

Future data center cost management will rely on reducing IT equipment energy consumption, which, in turn, reduces site infrastructure energy use. Virtualization technologies consolidate and optimize servers, storage, and network devices in real time, reducing energy use by enabling optimal use of existing data center equipment.

5.11.1 Virtualization Allows Data Centers To:
- Optimize use of existing servers, storage, and network devices based on business needs.
- Reduce electricity and new hardware/software commissioning costs.
- Consolidate for improved energy efficiency of IT equipment.
- Manage bandwidth requirements, power constraints, and time-differentiated rates.

5.12 Grid-Distributed Data Centers and Networks

Some data centers maintain fully networked and distributed locations on different electrical grids or in different geographic locations as backup for disaster recovery. Data centers that participate in DR using this strategy would likely need advance notice of the need for load migration, for planning and coordination purposes.

5.12.1 Challenges to Grid-Responsive Data Centers
The key challenges are as follows:
1. Perception of risk to business and operations
2. Performance measurement strategies
3. Lack of information
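Sections 5.10 and 5.12 describe grid-aware facilities that react to price or reliability signals, possibly by migrating load to a site on a different grid once advance notice is given. The following is a toy decision sketch only; the prices, threshold, and site names are invented, and no real demand-response program or API is implied.

```python
# Hypothetical electricity prices ($/MWh) and demand-response event flags per site.
sites = {
    "site-east": {"price": 210.0, "dr_event": True},
    "site-west": {"price": 85.0,  "dr_event": False},
}

def migration_plan(sites, price_threshold=150.0):
    """Suggest moving deferrable load away from expensive or DR-event sites."""
    sources = [s for s, v in sites.items()
               if v["dr_event"] or v["price"] > price_threshold]
    targets = [s for s in sites if s not in sources]
    if sources and targets:
        cheapest = min(targets, key=lambda s: sites[s]["price"])
        return [(src, cheapest) for src in sources]
    return []

print(migration_plan(sites))   # [('site-east', 'site-west')]
```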
3. Explain virtualization in data centers along with its benefits and challenges.
4. List the main issues with the networking of data centers.
5. Write a short note on the cloud as an extension of data centers.
6. What are the IaaS benefits for data centers?
7. Explain the IaaS operations and related issues with the help of a diagram.
8. What is networking in the data center? What is the topology of data center networks?
9. What are the three distinct tiers of data center networks? Show the multitier view of the data center network topology.
10. List the free cooling technologies.
11. Write a short note on smart grid-responsive data centers along with their challenges.
12. Explain the IT infrastructure virtualization technologies.

CHAPTER-6 DISASTER RECOVERY AND BUSINESS CONTINUITY

6.1. WHAT IS DISASTER RECOVERY (DR)?
Disaster recovery (DR) is the process, policies, and procedures related to preparing for the recovery or continuation of technology infrastructure that is vital to an organization after a natural or human-induced disaster.
High availability (HA) is a system design approach and associated service implementation that ensures a prearranged level of operational performance will be met during a contractual measurement period.
DR and HA come with different approaches to data center architecture and system design, different physical and design complexities, and, most importantly for many, different levels of cost.
The critical things that can cause a data center outage are broadly split into seven key categories, from the physical building itself to the core IT infrastructure:
1. Building
2. Environmental (power and cooling)
3. Fire systems
4. Access control
5. Security
6. Service provider connectivity
7. IT infrastructure
Ultimately, in the data center, the approach to and levels of investment in DR and HA planning come down to acceptable business risk. The approach to reducing or removing this risk is then designed into the physical data center, building infrastructure, platforms, systems, processes, and the people's responses to potential failures.

6.2. THE EVOLUTION OF THE DATA CENTER AND DATA CENTER RISK
In the days of the early mainframe, the data center was simply the climate-controlled safe house for the gigantic and power-hungry early computers. The data center and its contents were firmly in the hands of a few special souls in the IT department.
The new enterprise architectures that lie within the data center today are inherently more robust and resilient. Physical resilience is now augmented by virtual resource redundancy, and it is now relatively simple to overcome the operational risk of a specific hardware component outage.
Finally, the word "cloud" has entered the lexicon of the data center. Private clouds, built and operated within the client's own data center, offer a new paradigm in data center service provisioning. Promising on-demand, flexible, scalable, and changeable server, storage, and network resources, private clouds combine physical and virtual resources alongside a new and more complex management and operational stack, creating new efficiencies and complexities in data center enterprise infrastructure design. A small provisioning sketch follows.
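As a purely illustrative example of the on-demand provisioning a private cloud promises, here is a minimal sketch using the openstacksdk Python client. It assumes an OpenStack-based private cloud reachable through a clouds.yaml entry named "mycloud"; the image, flavor, and network names are placeholders, not values from this chapter.

import openstack

# Connect using credentials from a clouds.yaml entry (assumed name: "mycloud").
conn = openstack.connect(cloud="mycloud")

# Look up the building blocks of the server; these names are placeholders.
image = conn.image.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private-net")

# Request a virtual server on demand; the private cloud schedules it onto
# whichever hypervisor currently has capacity.
server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # "ACTIVE" once the VM is running

# Release the resource when it is no longer needed so it stops consuming
# capacity (and, in a chargeback model, stops incurring cost).
conn.compute.delete_server(server)

The key point is not the specific API but the consumption model: capacity is requested when needed and released when not, against a pool of physical and virtual resources that the private cloud's management stack schedules and tracks.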
6.3. DATA CENTER WIRING COMPLEXITY
In today's world of DR and HA, assessing, addressing, and coping with a mix of facility, physical, virtual, and now "as a service" data center services is either more complex and more challenging than anything the chief information officer (CIO) and the IT department have faced before, or much simpler and more straightforward than we could have ever imagined; it depends on which perspective you are looking from.
Fig: Data center wiring complexity.

6.4. DR AND HA STRATEGY
TOP 10 DATA CENTER OUTAGES:

6.5. DESIGN FOR BUSINESS CONTINUITY AND DISASTER RECOVERY
An ounce of prevention is better than a pound of cure. Data center infrastructure should be built robustly, with consideration of business continuity (BC) and DR requirements that go beyond jurisdictional building codes and standards. The International Building Code (IBC) or the California Building Code generally addresses the life safety of occupants; the codes give little regard to property or functional losses and BC. To sustain data center operations after a natural or man-made disaster, the design of a data center must consider system redundancy. Building structural and nonstructural components must be hardened with consideration of appropriate BC.
Electrical power outages are commonly caused by natural disasters. An alternative power source, such as proven fuel cell technology, could be considered in lieu of the power grid. For Internet Service Provider (ISP) connectivity, ensure multiple Internet connections that include a combination of fiber cable, T1 lines, satellite, and DSL to provide baseline service if one or two connections fail.

6.6. NATURAL DISASTERS
There are many natural disasters, including earthquakes, tsunamis, volcanic eruptions, hurricanes, tornadoes, wildfires, heat waves, etc.
Example: The 2011 Great East Japan Earthquake. Lessons learned from the Japan earthquake and tsunami are discussed below.

6.7. LET US LOOK AT THE IMPACTS ON IT-RELATED BUSINESSES:
1. First
Chains of command were lost. Almost nobody could decide on appropriate measures for IT infrastructure recovery.
Communication channels were lost. It was nearly impossible to get correct information such as "Who is still living?", "Who is in charge?", "What happened?", and "What is the current status?"
Stocks of fuel for emergency power supplies were very limited (1 or 2 days).
Server rooms were strictly protected by electronic security systems, so without enough electricity these security systems became obstacles to emergency response as the aftershocks kept coming. At some organizations, they kept the doors of their server rooms open.
2. Second
Replacement facilities or equipment (servers, PCs, etc.) were not supplied quickly by the vendors because so many organizations had suffered from the disaster. Backup centers also suffered in the disaster. Therefore, quick recovery was almost impossible at many organizations.
Because of rotational blackouts, companies could not access their servers from remote offices.
Data recovery was a very heavy task. If a company's backup rotation was once per week, it lost almost one week's worth of data. In some cases, both electronic and paper-based backups of data were lost.
In the areas evacuated as a result of the nuclear power plant accidents, nobody could enter their own offices.
Many IT-related devices were washed away by the tsunami; some fell into the wrong hands.
Fuel supply was interrupted in many areas because transportation routes were not repaired quickly.
Many organizations moved their data centers and factories to the western side of Japan.
The emergency power supplies could not operate for long periods of time; they were designed for short-term operation.
Monthly data processing was impossible in many organizations, resulting in much delay.
3. Third
On the western side of Japan, many organizations confronted electricity shortages and could not start full operations.

6.8. NOW THAT WE HAVE A FULL PICTURE OF THE DEVASTATION THAT OCCURRED, LET US LOOK AT THE LESSONS WE'VE LEARNED:
1. Bad situations can continue for a long time. Quick recovery is sometimes impossible. Be prepared for this.
2. Prepare as many people as you possibly can who can respond to disasters. Having a fixed definition of roles and responsibilities may be hazardous.
3. Data encryption is indispensable.
4. A cloud computing-like environment can be very helpful in situations like this.
5. Uncertainty-based risk management is necessary.

6.9. EMERGENCY POWER/BACKUP GENERATORS
Keep a minimum of 12 h of on-site fuel storage.
Test the switchover regularly.
Consider multiple fuel delivery contracts with diesel fuel suppliers from diverse locations.
Check your backup generators for maintenance requirements and limitations to support 48, 96, or 144 h plans.

6.10. COMMUNICATIONS
Harden the communication systems of key staff and decision makers who are critical to the incident management process (e.g., equip them with engine generators at home).
Prearrange a full-service office near key staff and decision makers who cannot reach the data center.
Communicate status to decision makers, staff, and customers by email; post status on a website with a dashboard. Communication tools include email, direct phone calls, website postings, and instant messages.
Use social media and GPS to communicate with and locate employees. The Federal Communications Commission reported that 25% of cell phone towers lost power, rendering mobile phones useless.
Ensure diversity of telecom providers, including 3G/4G and satellite phones. Make provision to charge mobile phones. Staff the phone line 24/7. Provide redundancy of voice and data infrastructures.
Employees who were unable to leave home safely because of storm damage and lack of transportation, or who were unwilling to leave their families, could work through telecommuting systems and do what needed to be done remotely.

6.11. INFORMATION TECHNOLOGY
Move IT loads to other sites prior to an event.
Regular backup storage and mirroring processes may need to be changed because of their vulnerability to power loss.
Move the email system, documentation, and storage to an external cloud provider to ensure uninterrupted communication; moving communication systems to cloud-based hosting may prove very valuable.
Prioritize critical tasks to streamline recovery.
Oftentimes, IT security is taxed more during and after a disaster, as weak links in security and business processes are exploited. It is important to remain vigilant and apply security best practices during the BC/DR process.

CONCLUSION
There are many BC/DR practices relating to data center and IT areas that could be considered to ensure resilience. These include deploying data in redundant facilities, add-on modular data center containers, co-location, cloud, etc.

REVIEW QUESTIONS
1. What are disaster recovery and high availability?
2. List the critical things that can cause a data center outage.
3. Write a short note on the evolution of data centers and data center risks.
4. What is data center wiring complexity?
5. What is the DR and HA strategy?
6. What are the top 10 data center outages?
7. Explain the design for business continuity and disaster recovery.
8. What are the natural disasters?
9. Explain the impacts on IT-related businesses.
10. How do you maintain emergency power/backup generators?
11. Explain the communications measures for disaster recovery.
12. How do you protect the information technology?