JTO Ph-II DNIT INDEX  JTO PH - 2 (DNIT) Index S. No. Chapter Page No. 1. IP addressing , VLSM & CIDR 2 2. Server implementation (Web, FTP, Database) 12 DBMS (MySql, Oracle) 32 3. 4. Cyber Attacks 45 5. HTML and CSS 56 6. Role of IT in Digital Marketing in present scenario 124 JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 1 of 131 For Restricted Circulation JTO Ph-II DNIT IP Addressing : VLSM & CIDR Chapter 1 : IP addressing , VLSM & CIDR 1.1 LEARNING OBJECTIVES The objectives of this chapter is to understand i) Concept of IP Address ii) Special IPv4 Address iii) Class A, B, and C IP addresses iv) Private & Public IP Address v) Concept of subnetting vi) Types of subnetting - FLSM & VLSM vii) Classless Inter-Domain Routing (CIDR) & Supernetting viii) Representation of IPv4 address in CIRD notation 1.2 INTRODUCTION Internet is a dramatically different network than when it was first established in the early 1980s. One of the most important topics in any discussion of TCP/IP is IP addressing. An IP address is a numeric identifier assigned to each machine on an IP network. It designates the specific location of a device on the network. An IP address is a software address, not a hardware address—the latter is hard-coded on a network interface card (NIC) and used for finding hosts on a local network. IP addressing was designed to allow hosts on one network to communicate with a host on a different network regardless of the type of LANs the hosts are participating in. 1.3CONCEPT OF IP ADDRESS IP is the primary layer 3 protocol in the Internet suite. In addition to internetwork routing, IP provides error reporting and fragmentation and reassembly of information units called datagrams for transmission over networks with different maximum data unit sizes. IP represents the heart of the Internet protocol suite. IP addresses are globally unique, 32-bit numbers. Globally unique addresses permit IP networks anywhere in the world to communicate with each other. An IP address is divided into three parts. The first part designates the network address, the second part designates the subnet address, and the third part designates the host address. In generalized format two parts - network bits & host bits. Every IPv4 address is always coupled with 32 bit subnet mask value by explicit or implicit representation which is used to define the network & host bit boundary of IP address. JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 2 of 131 For Restricted Circulation JTO Ph-II DNIT IP Addressing : VLSM & CIDR 32 bits Network Bits Subnet Bits (Optional) Host Bits Figure 1: Three Parts IPv4 Address The IP version 4 (IPv4), defines a 32-bit address which means that there are only 2^32 (4,294,967,296) IPv4 addresses available. This might seem like a large number of addresses, but the finite number of IP addresses will eventually be exhausted. 1.4 DOTTED-DECIMAL NOTATION To make Internet addresses easier for human users to read and write, IP addresses are often expressed as four decimal numbers, each separated by a dot. This format is called "dotted- decimal notation.” Dotted-decimal notation divides the 32-bit Internet address into four 8-bit (byte) fields and specifies the value of each field independently as a decimal number with the fields separated by dots. Figure 1: Notation of IP Address 1.5 SPECIAL ADDRESSES  Network Address Network address is used to uniquely to identify networks. It represents collection of devices (Network) that has the same network bits in their IP address. 
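As an illustration of the dotted-decimal notation described in section 1.4, the split of a 32-bit address into four 8-bit fields can be sketched in a few lines of Python (a minimal example using only the standard library; the helper names are purely illustrative):

import socket
import struct

def to_dotted_decimal(value: int) -> str:
    # Pack the 32-bit value into four bytes and render them as the four dotted fields.
    return socket.inet_ntoa(struct.pack("!I", value))

def to_integer(dotted: str) -> int:
    # Reverse direction: dotted-decimal string back to the single 32-bit number.
    return struct.unpack("!I", socket.inet_aton(dotted))[0]

print(to_dotted_decimal(0xC0A80101))   # 192.168.1.1
print(hex(to_integer("192.168.1.1")))  # 0xc0a80101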
The host bits of network address contains all 0‟s. Routers maintain these network addresses in their routing table for taking routing decisions. Broadcast Address Broadcast address refers to special address that is used to target all systems on a specific subnet/ network instead of single hosts. In other words broadcast address allows information to be sent to all machines on a given subnet rather than to a specific machine. Broadcast address contains all 1‟s in the host bit places. JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 3 of 131 For Restricted Circulation JTO Ph-II DNIT IP Addressing : VLSM & CIDR Loop back IP Address The IP address range 127.0.0.0 – 127.255.255.255 is reserved for loopback. Loopback IP address is managed entirely by and within the operating system. These addresses enable the Server and Client processes on a single system to communicate with each other. The loopback address allows for a reliable method of testing the functionality of an Ethernet card and its drivers and software without a physical network. APIPA - Automatic Private IP Addressing It is a feature in operating systems which enables devices to self-configure an IP address and subnet mask automatically when their DHCP (Dynamic Host Configuration Protocol) server isn‟t reachable. The IP address range for APIPA is (169.254.0.1 to 169.254.255.254) 0.0.0.0 Address In the context of servers, 0.0.0.0 address can mean "all IPv4 addresses on the local machine" In the context of network 0.0.0.0/8 refers to current network In the context of routing tables, a network destination of 0.0.0.0 is used with a network mask of 0 to depict the default route as a destination subnet. 255.255.255.255 Address Reserved for the "limited broadcast" destination address 1.6 CLASSFUL NETWORKS In order to provide the flexibility required to support different size networks, earlier the designers decided that the IP address space (0.0.0.0 to 255.255.255.255) should be divided into three different address classes - Class A, Class B, and Class C. This is often referred to as "classful" addressing because the address space is split into three predefined classes, groupings, or categories. Class A networks are intended mainly for use with a few very large networks, because they provide only 8 bits for the network address field. Class B networks allocate 16 bits, and Class C networks allocate 24 bits for the network address field. Class C networks only provide 8 bits for the host field, however, so the number of hosts per network may be a limiting factor. In all three cases, the leftmost bit(s) indicate the network class. Figure.3 below shows the address formats for Class A, B, and C IP networks. Class D addresses are used for multicast purpose where as class E addresses are reserved for research purpose. One of the fundamental features of classful IP addressing is that each address contains a self-encoding key that identifies the dividing point between the network-prefix and the host- number. JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 4 of 131 For Restricted Circulation JTO Ph-II DNIT IP Addressing : VLSM & CIDR Figure 3 : Classes of IP Address Class A NETWORKS (/8 Prefixes) Each Class A network address has an 8-bit network-prefix with the highest order bit set to 0 and a seven-bit network number, followed by a 24-bit host-number. Today, it is no longer considered 'modern' to refer to a Class A network. Class A networks are now referred to as "/8s" (pronounced "slash eight" or just "eights") since they have an 8-bit network-prefix. 
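Before moving on with the classful prefixes, the special addresses described in section 1.5 can be checked with Python's standard ipaddress module (a minimal sketch; the address values are only illustrative):

import ipaddress

# An interface address together with its mask defines one network.
iface = ipaddress.ip_interface("192.168.10.77/24")
net = iface.network

print(net.network_address)      # 192.168.10.0   (host bits all 0s)
print(net.broadcast_address)    # 192.168.10.255 (host bits all 1s)
print(net.num_addresses - 2)    # 254 assignable host addresses
print(ipaddress.ip_address("127.0.0.1").is_loopback)       # True (loopback range)
print(ipaddress.ip_address("169.254.10.5").is_link_local)  # True (APIPA range)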
A maximum of 126 (2^7 - 2) /8 networks can be defined. The 2 is subtracted because the /8 network 0.0.0.0 is reserved for use as the default route and the /8 network 127.0.0.0 (also written 127/8 or 127.0.0.0/8) is reserved for the "loopback" function. Each /8 supports a maximum of 16,777,214 (2^24 - 2) hosts per network. Here again 2 is subtracted, because the all-0s ("this network") and all-1s ("broadcast") host-numbers may not be assigned to individual hosts.

Class B Networks (/16 Prefixes)
Each Class B network address has a 16-bit network-prefix with the two highest order bits set to 1-0 and a 14-bit network number, followed by a 16-bit host-number. Class B networks are now referred to as "/16s" since they have a 16-bit network-prefix. A maximum of 16,384 (2^14) /16 networks can be defined, with up to 65,534 (2^16 - 2) hosts per network.

Class C Networks (/24 Prefixes)
Each Class C network address has a 24-bit network-prefix with the three highest order bits set to 1-1-0 and a 21-bit network number, followed by an 8-bit host-number. Class C networks are now referred to as "/24s" since they have a 24-bit network-prefix. A maximum of 2,097,152 (2^21) /24 networks can be defined, with up to 254 (2^8 - 2) hosts per network.

The following table gives an overview of this classful addressing scheme, a now-obsolete system.

Class                Leading bits   Network bits (N)   Host bits (H)   Network-number bits   Number of networks    Addresses per network    Address range
Class A              0              8                  24              7                     126 (2^7 - 2)         16,777,214 (2^24 - 2)    0.0.0.0 - 127.255.255.255
Class B              10             16                 16              14                    16,384 (2^14)         65,534 (2^16 - 2)        128.0.0.0 - 191.255.255.255
Class C              110            24                 8               21                    2,097,152 (2^21)      254 (2^8 - 2)            192.0.0.0 - 223.255.255.255
Class D (multicast)  1110           not defined        not defined     not defined           not defined           not defined              224.0.0.0 - 239.255.255.255
Class E (reserved)   1111           not defined        not defined     not defined           not defined           not defined              240.0.0.0 - 255.255.255.255
Table 1. Overview of the classful addressing scheme

The classful A, B, and C octet boundaries were easy to understand and implement, but they did not foster the efficient allocation of a finite address space. A /24, which supports 254 hosts, is too small, while a /16, which supports 65,534 hosts, is too large. In the past, the Internet assigned sites with several hundred hosts a single /16 address instead of a couple of /24 addresses. Classful network design served its purpose in the startup stage of the Internet, but it lacked scalability in the face of the rapid expansion of the network in the 1990s. The class system of the address space was replaced with Classless Inter-Domain Routing (CIDR) in 1993. CIDR is based on variable-length subnet masking (VLSM) to allow allocation and routing based on arbitrary-length prefixes.

1.7 PRIVATE IP ADDRESS AND PUBLIC IP ADDRESS
Private IP addresses and public IP addresses are both used to uniquely identify a machine on a network. Private IP addresses are used within a local network; they are invalid and not routable on the Internet. A public IP address is mostly used outside the local network and is provided by an ISP (Internet Service Provider). The following Table 2 lists the major differences and characteristics of private and public IP addresses.
JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 6 of 131 For Restricted Circulation JTO Ph-II DNIT IP Addressing : VLSM & CIDR Key Private IP Address Public IP Address Scope Private IP address scope is local to Public IP address scope is global. present network. Locally unique within Globally unique across Internet. a private network Communication Private IP Address is used to Public IP Address is used to communicate within the network. communicate outside the network, Internet. Provider Local Network administrator assigns ISP, Internet Registries/ Internet Service private IP addresses. There is no Provider control the public IP address owner for private IP addresses. allocation Cost Private IP Addresses are free of cost. Public IP Address comes with a cost. Anybody can use private IP addresses Allotted owners alone can use their without any restrictions. public IP addresses. Range Private IP Address range: Except private IP Addresses, and special IP address, rest IP addresses are public. Class A: 10.0.0.0 – 10.255.255.255, Class B: 172.16.0.0 – 172.31.255.255, Class C: 192.168.0.0 – 192.168.255.255 Table 2. Differences between Private IP Address and Public IP Address. 1.8 SUBNETTING In 1985, RFC 950 defined a standard procedure to support the subnetting, or division, of a single Class A, B, or C network number into smaller pieces. Subnetting was introduced to overcome some of the problems that parts of the Internet were beginning to experience with the classful two-level addressing hierarchy. IP Subnetting is a process of dividing a large IP network in smaller IP networks.  Advantages of Subnetting: Manageable Networks: We can partition a network as group of devices based on their interactions or purpose of those devices. Enhanced security: Access policies can be enforced. Each group/ subnet can be managed efficiently by controlling them what services they can access. Improved routing efficiency: Routing can be normalized by proper planning of subnets and supernets which improves network convergence. JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 7 of 131 For Restricted Circulation JTO Ph-II DNIT IP Addressing : VLSM & CIDR Improved bandwidth: Reduces the size of the broadcast domain & broadcast messages helping network devices not to waste their resources in listening to unnecessary broadcast messages. Customized network planning: An organization will be assigned with an IP network and the organization can then divide it into subnets to assign a distinct subnetwork number for each of its internal networks. This allows the organization to deploy additional subnets without needing to obtain a new network number from the Internet. Subnetting designates high-order bits from the host as part of the network prefix. This method divides a network into smaller subnets. The default number of network bits will be increased and host bits will be reduced. Subnet mask When subnetting is done the default prefix length/ network bits will be increased and known as extended-network-prefix. The default count of network bits as standardized in Classful addressing is altered. Subnet mask is used to define the boundary between network bits and host bits. Subnet mask is a string of 1‟s followed by string of 0‟s of 32 bits in length. Figure 2: Default Subnet mask of Classful IP Address The 1‟s available in subnet mask identifies network bits and 0‟s identify the host bits defining the boundary of network and host bits in a given 32bit IP address. 1.9 TYPES OF SUBNETTING There are two types of Subnetting FLSM and VLSM. 
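As a concrete illustration of the subnetting idea from section 1.8, the short Python sketch below (standard ipaddress module, illustrative values) divides one /24 network into four equal /26 subnets by borrowing two host bits; both FLSM and VLSM, described next, build on this operation:

import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")

# Borrowing two host bits (24 -> 26) yields four equal subnets of 62 usable hosts each.
for subnet in net.subnets(prefixlen_diff=2):
    print(subnet, subnet.netmask)
# 192.168.1.0/26   255.255.255.192
# 192.168.1.64/26  255.255.255.192
# 192.168.1.128/26 255.255.255.192
# 192.168.1.192/26 255.255.255.192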
In FLSM, all subnets have an equal number of host addresses and use the same subnet mask. In VLSM, subnets have a flexible number of host addresses and use different subnet masks.

a) Fixed Length Subnet Masking (FLSM)
FLSM subnetting divides a network into smaller subnets of equal size. All these subnets can accommodate an equal number of hosts, and all of them use the same subnet mask. Wastage of IP address space will be higher if this type of subnetting is used.

b) Variable Length Subnet Masking (VLSM)
In a production environment, we may need subnets of different sizes. If we do subnetting based on FLSM, every subnet must be sized for the largest one, which wastes IP addresses. VLSM comes to the rescue: VLSM can use subnet masks of different lengths, which reduces IP address wastage considerably.

Figure 3: Diagram showing FLSM & VLSM Subnets

1.10 SUPERNETTING - CLASSLESS INTER-DOMAIN ROUTING (CIDR)
By the 1990s, the exponential growth of the Internet was beginning to raise serious concerns among members of the IETF about the ability of the Internet's routing system to scale and support future growth. These problems were related to:
The rapid growth in the size of the global Internet's routing tables.
The eventual exhaustion of the 32-bit IPv4 address space.
Projected Internet growth figures made it clear that these problems were likely to become critical. The response to these immediate challenges was the development of the concept of supernetting, or Classless Inter-Domain Routing (CIDR).

CIDR eliminates the traditional concept of Class A, Class B, and Class C network addresses and replaces them with the generalized concept of a "network-prefix." In the CIDR model, each piece of routing information is advertised with a bit mask (or prefix-length). The prefix-length specifies the number of leftmost contiguous bits in the network portion of each routing table entry. For example, a network with 20 bits of network-number and 12 bits of host-number would be advertised with a 20-bit prefix length (a /20). The IP address advertised with the /20 prefix could be a former Class A, Class B, or Class C. Routers use the network-prefix, rather than the first 3 bits of the IP address, to determine the dividing point between the network number and the host number. As a result, CIDR supports the deployment of arbitrarily sized networks rather than the standard 8-bit, 16-bit, or 24-bit network numbers associated with classful addressing.

REPRESENTATION OF IPV4 NETWORK IN CIDR NOTATION
Figure 4: Diagram representing SNM to CIDR prefix conversion
Example: Network 102.168.1.128 with subnet mask 255.255.255.128 can be represented in CIDR notation as 102.168.1.128/25.
Procedure:
1. Write the subnet mask in binary: 255.255.255.128 -> 1111 1111. 1111 1111. 1111 1111. 1000 0000
2. Count the number of 1s in the binary subnet mask: 25 ones (25 network bits) -> "/25" network prefix
3. Write the CIDR notation by placing the prefix after the network address: 102.168.1.128/25
Thus 102.168.1.128 with subnet mask 255.255.255.128 is equivalent to 102.168.1.128/25.

Benefits of CIDR/ Supernetting:
CIDR enables the efficient allocation of the IPv4 address space.
CIDR supports route aggregation where a single routing table entry can represent the address space of thousands of traditional classful routes. JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 10 of 131 For Restricted Circulation JTO Ph-II DNIT IP Addressing : VLSM & CIDR Figure 5: Route summarization by CIDR/ Supernetting CIDR allows a single routing table entry to specify how to route traffic to many individual network addresses by reducing the routing table size and helps control the amount of routing information in the Internet's backbone routers Supernetting reduces route flapping and eases the local administrative burden of updating external routing information. 1.11 CONCLUSION An IP address is an address used in order to uniquely identify a device on an IP network. The address is made up of 32 binary bits, which can be divisible into a network portion and host portion with the help of a subnet mask. The 32 binary bits are broken into four octets (1 octet = 8 bits). Each octet is converted to decimal and separated by a period (dot). JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 11 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) Chapter 2 : Server implementation (Web, FTP, Database) 2.1 Learning Objective This chapter cover the concept of server implementation of servers to host different types of services like Web service, FTP service and Database service. After reading the chapter the learners will get the concept of Web (HTTP), FTP and Database servers. 2.2 What Does Server Mean? A server is a computer, a device or a program that is dedicated to managing network resources. They are called that because they “serve” another computer, device, or program called “client” to which they provide functionality. There are a number of categories of servers, including print servers, file servers, network servers and database servers. In theory, whenever computers share resources with client machines they are considered servers. However, servers are often referred to as dedicated because they carry out hardly any other tasks apart from their server tasks. The purpose of a server is to manage network resources such as hosting websites, transmitting data, sending or receiving emails, controlling accesses, etc. The server is connected to a switch or router used by all the other network computers can use to access the server‟s features and services (browsing websites, checking emails, communicating with other users, etc.). 2.3 Some of the most common types of server include: Database servers They allow other computers to access a database and retrieve or upload data from and into it.  File servers They provide users with access to files and data stored centrally.  Web servers They deliver requested web pages to multiple client web browsers.  Mail servers They are a sort of “virtual post office” that store and sort emails before they are sent to users upon request.  Application servers JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 12 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) They are servers that provide an environment with all the necessary requirements to run or develop an application.  Other types of server include:  Proxy servers  Cloud servers  Policy servers  Blade servers  Print servers  Domain name services Nearly all personal computers are capable of serving as network servers. 
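For example, any machine with a Python interpreter can serve the files in a directory over HTTP using nothing but the standard library (a minimal illustration, not a production configuration):

from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the files in the current directory over HTTP on port 8000.
# (Equivalent to running "python -m http.server 8000" from a shell.)
server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
server.serve_forever()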
However, usually software/hardware system dedicated computers have features and configurations optimized just for this task. For example, dedicated servers may have high-performance RAM, a faster processor and several high-capacity hard drives. In addition, dedicated servers may be connected to redundant power supplies, several networks and other servers. Such connection features and configurations are necessary as many client machines and client programs may depend on them to function efficiently, correctly and reliably. For example, servers must be able to stay always on to deliver their services, and they‟re set up with a certain degree of fault tolerance to reduce the risk of causing service issues. In order to operate in the unique network environment where many computers and hardware/software systems are dependent on just one or several server computers, a server often has special characteristics and capabilities, including:  The ability to update hardware and software without a restart or reboot.  Advanced backup capability for frequent backup of critical data.  Advanced networking performance.  Automatic (invisible to the user) data transfer between devices.  High security for resources, data and memory protection. Server computers often have special operating systems not usually found on personal computers. Some operating systems are available in both server and desktop versions and use similar interfaces. JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 13 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) However, an increase in the reliability of both server hardware and operating systems has blurred the distinctions between desktop and server operating systems. 2.4 Client/Server Architecture  What Does Client/Server Architecture Mean? Client/server architecture is a computing model in which multiple components work in strictly defined roles to communicate. The server hosts, delivers and manages most of the resources and services to be consumed by the client. This type of shared resources architecture has one or more client computers connected to a central server over a network or internet connection. Client/server architecture is also known as a networking computing model or client/server network because all the requests and services are delivered over a network. It‟s considered a form of distributed computing system because the components are doing their work independently of one another. In a client/server architecture, the server acts as the producer and the client acts as a consumer. The server houses and provides high-end, computing-intensive services to the client on demand. These services can include application access, storage, file sharing, printer access and/or direct access to the server‟s raw computing power. Client/server architecture works when the client computer sends a resource or process request to the server over the network connection, which is then processed and delivered to the client. A server computer can manage several clients simultaneously, whereas one client can be connected to several servers at a time, each providing a different set of services. The client/server model as it evolved served pretty well for what some refer to as web 2.0, where the Internet slowly became a functional virtual space for users. It provided an established and predictable model for how user sessions would go, and how providers delivered resources based on requests for data packets and other resources. 
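At the lowest level, this request/response exchange can be sketched with Python sockets; the snippet below is a minimal, self-contained echo exchange on the loopback address, not a real service, and actual applications layer a protocol such as HTTP on top of it:

import socket
import threading

# Server side: one listening socket that answers a single request.
listener = socket.create_server(("127.0.0.1", 9000))

def serve_one_request() -> None:
    conn, _addr = listener.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"echo: " + request)

threading.Thread(target=serve_one_request, daemon=True).start()

# Client side: connect, send a request, read the response.
with socket.create_connection(("127.0.0.1", 9000)) as client:
    client.sendall(b"hello server")
    print(client.recv(1024))   # b'echo: hello server'

listener.close()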
Example of Client/Server Communications Here's an example of how client/server communications work. In an average use of a browser to access a server-side website, the user or client enters the URL. The DNS server looks up the web server's IP address, and gives it to the browser. The browser generates an HTTP or HTTPS request, and the server, as the producer, sends the files. The client, as the consumer, receives them, and then, typically, sends follow-up requests. Although this model technically works for any number of similar processes, it does have some drawbacks. Over time, an alternative called peer-to-peer or P2P modeling has emerged, which many feel is in some ways superior to traditional client/server models, especially in terms of handing handling specific challenges where communications are more evolved.  Issues with Client/Server Models JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 14 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) One of the biggest issues with a traditional client/server setup is the nature of unpredictable workloads. In defining client/server systems as systems that scale vertically and use central data stores, some analysts believe that peer-to-peer is more agile and versatile for making sure that unpredictable workloads are managed well. Experts talk about things like redundancy and availability zones and failover as a means to keep online business systems running smoothly, despite changes in demand or other problems. For example, another major issue is the utility of a distributed denial of service (DDoS) attack. In this type of attack, out-of-control client activity swamps a server. Those who are looking at the Internet of a couple of decades ago point out that it was fairly easy to swamp a site with a DDoS attack because the average client/server model wasn't set up for thresholds above a certain amount of traffic. Peer-to-peer systems can solve many of those problems, and secure systems against DDoS attacks and similar cyber attacks. Peer-to-peer is also helpful in handling some kinds of other disruptions based on a single point of failure. With the emergence of decentralized and distributed systems, for example, blockchain immutable ledger technologies, peer-to-peer systems are becoming more popular and starting to replace client/server architectures. 2.5 Two-Tier Architecture What Does Two-Tier Architecture Mean? A two-tier architecture is a software architecture in which a presentation layer or interface runs on a client, and a data layer or data structure gets stored on a server. Separating these two components into different locations represents a two-tier architecture, as opposed to a single-tier architecture. Other kinds of multi-tier architectures add additional layers in distributed software design. Experts often contrast a two-tier architecture to a three-tier architecture, where a third application or business layer is added that acts as an intermediary between the client or presentation layer and the data layer. This can increase the performance of the system and help with scalability. It can also eliminate many kinds of problems with confusion, which can be caused by multi-user access in two-tier architectures. However, the advanced complexity of three-tier architecture may mean more cost and effort. An additional note on two-tier architecture is that the word "tier" commonly refers to splitting the two software layers onto two different physical pieces of hardware. 
Multi-layer programs can be built on one tier, but because of operational preferences, many two-tier architectures use a computer for the first tier and a server for the second tier. JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 15 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) 2.6 Three-Tier Architecture A three-tier client/server is a type of multi-tier computing architecture in which an entire application is distributed across three different computing layers or tiers. It divides the presentation, application logic and data processing layers across client and server devices. It is an example of three-tier application architecture. A three-tier client/server adds an additional layer/tier to the client/server-based two-tier models. This additional layer is a server tier that acts as an intermediary or middleware appliance. In a typical implementation scenario, the client or first tier holds the application presentation/interface and broadcasts all of its application-specific requests to the middleware tier server. The middleware or second tier calls the application logic server or third tier for application logic. The distribution of the entire application logic across three tiers helps optimize the overall application access and layer/tier level development and management. 2.7 HTTP Server A web server is software and hardware that uses HTTP (Hypertext Transfer Protocol) and other protocols to respond to client requests made over the World Wide Web. The main job of a web server is to display website content through storing, processing and delivering webpages to users. Besides HTTP, web servers also support SMTP (Simple Mail Transfer Protocol) and FTP (File Transfer Protocol), used for email, file transfer and storage. Web server hardware is connected to the internet and allows data to be exchanged with other connected devices, while web server software controls how a user accesses hosted files. The web server process is an example of the client/server model. All computers that host websites must have web server software. Web servers are used in web hosting, or the hosting of data for websites and web-based applications -- or web applications. How do web servers work? Web server software is accessed through the domain names of websites and ensures the delivery of the site's content to the requesting user. The software side is also comprised of several components, with at least an HTTP server. The HTTP server is able to understand HTTP and URLs. As hardware, a web server is a computer that stores web server software and other files related to a website, such as HTML documents, images and JavaScript files. JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 16 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) When a web browser, like Google Chrome or Firefox, needs a file that's hosted on a web server, the browser will request the file by HTTP. When the request is received by the web server, the HTTP server will accept the request, find the content and send it back to the browser through HTTP. More specifically, when a browser requests a page from a web server, the process will follow a series of steps. First, a person will specify a URL in a web browser's address bar. The web browser will then obtain the IP address of the domain name -- either translating the URL through DNS (Domain Name System) or by searching in its cache. This will bring the browser to a web server. 
The browser will then request the specific file from the web server by an HTTP request. The web server will respond, sending the browser the requested page, again, through HTTP. If the requested page does not exist or if something goes wrong, the web server will respond with an error message. The browser will then be able to display the webpage. Multiple domains also can be hosted on one web server. Examples of web server uses Web servers often come as part of a larger package of internet- and intranet-related programs that are used for:  sending and receiving emails;  downloading requests for File Transfer Protocol (FTP) files; and  building and publishing webpages. Many basic web servers will also support server-side scripting, which is used to employ scripts on a web server that can customize the response to the client. Server-side scripting runs on the server machine and typically has a broad feature set, which includes database access. The server-side scripting process will also use Active Server Pages (ASP), Hypertext Preprocessor (PHP) and other scripting languages. This process also allows HTML documents to be created dynamically. JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 17 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) 2.8 Dynamic vs. static web servers A web server can be used to serve either static or dynamic content. Static refers to the content being shown as is, while dynamic content can be updated and changed. A static web server will consist of a computer and HTTP software. It is considered static because the sever will send hosted files as is to a browser. Dynamic web browsers will consist of a web server and other software such as an application server and database. It is considered dynamic because the application server can be used to update any hosted files before they are sent to a browser. The web server can generate content when it is requested from the database. Though this process is more flexible, it is also more complicated. 2.9 Common and top web server software on the market There are a number of common web servers available, some including:  Apache HTTP Server. Developed by Apache Software Foundation, it is a free and open source web server for Windows, Mac OS X, Unix, Linux, Solaris and other operating systems; it needs the Apache license.  Microsoft Internet Information Services (IIS). Developed by Microsoft for Microsoft platforms; it is not open sourced, but widely used.  Nginx. A popular open source web server for administrators because of its light resource utilization and scalability. It can handle many concurrent sessions due to its event-driven architecture. Nginx also can be used as a proxy server and load balancer.  Lighttpd. A free web server that comes with the FreeBSD operating system. It is seen as fast and secure, while consuming less CPU power.  Sun Java System Web Server. A free web server from Sun Microsystems that can run on Windows, Linux and Unix. It is well-equipped to handle medium to large websites. Leading web servers include Apache, Microsoft's Internet Information Services (IIS) and Nginx - - pronounced engine X. Other web servers include Novell's NetWare server, Google Web Server (GWS) and IBM's family of Domino servers. 
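The browser-to-server exchange described above can be reproduced with a few lines of Python using the standard http.client module (example.com is just an illustrative host):

import http.client

# Open a connection to the web server and send a GET request, as a browser would.
conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/", headers={"Host": "example.com"})

response = conn.getresponse()
print(response.status, response.reason)   # e.g. 200 OK
print(response.read(200))                 # first bytes of the returned HTML
conn.close()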
JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 18 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) Considerations in choosing a web server include how well it works with the operating system and other servers; its ability to handle server-side programming; security characteristics; and the publishing, search engine and site-building tools that come with it. Web servers may also have different configurations and set default values. To create high performance, a web server, high throughput and low latency will help. 2.10 Web server security practices There are plenty of security practices individuals can set around web server use that can make for a safer experience. A few example security practices can include processes like:  a reverse proxy, which is designed to hide an internal server and act as an intermediary for traffic originating on an internal server;  access restriction through processes such as limiting the web host's access to infrastructure machines or using Secure Socket Shell (SSH);  keeping web servers patched and up to date to help ensure the web server isn't susceptible to vulnerabilities;  network monitoring to make sure there isn't any or unauthorized activity; and  using a firewall and SSL as firewalls can monitor HTTP traffic while having a Secure Sockets Layer (SSL) can help keep data secure. 2.11 How to encrypt and secure a website using HTTPS The web is moving to HTTPS. Find out how to encrypt websites using HTTPS to stop eavesdroppers from snooping around sensitive and restricted web data. Encrypting web content is nothing new: It's been nearly 20 years since the publication of the specification for encrypting web content by running HTTP over the Transport Layer Security protocol. However, running a secure encrypted web server has gone from an option to a virtual necessity in recent years. Attackers continue to seek -- and find -- ways to steal information sent between users and web services, often by tapping into unencrypted content being sent over the Hypertext Transfer JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 19 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) Protocol. Even for mundane, untargeted web content, securing a website with encryption is crucial, as the top browsers now flag unencrypted websites as potentially dangerous. While HTTPS website encryption is a requirement for assuring data integrity between browsers and servers, it is also increasingly a prerequisite for new browser functionality. Learning how to encrypt a website by enabling HTTPS is mandatory, especially for enterprises that want to provide users with a safe and secure web experience.  What is HTTPS? HTTP transfers data as plain text between the client and server. Therefore, anyone who has access to any network segment between you and the server -- on your network, on the server's network or any place in between -- is able to view the contents of your web surfing. Use HTTPS to protect data relating to financial transactions, personally identifiable information or any other sensitive data, as well as to avoid having browsers flag your site as insecure. HTTPS enables website encryption by running HTTP over the Transport Layer Security (TLS) protocol. Even though the SSL protocol was replaced 20 years ago by TLS, these certificates are still often referred to as SSL certificates. Here's a simplified view of how it works: 1. 
You start your web browser and request a secure page by using the https:// prefix on the URL. 2. Your web browser contacts the web server on the HTTPS port -- TCP port 443 -- and requests a secure connection. 3. The server responds with a copy of its SSL certificate. 4. Your web browser uses the certificate to verify the identity of the remote server and extract the remote server's public key. 5. Your web browser creates a session key, encrypts it with the server's public key and sends the encrypted key to the server. 6. The server uses its private key to decrypt the session key. JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 20 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) 7. The client and server use the session key to encrypt all further communications. While HTTPS sessions can be reliably considered secure from eavesdropping attacks, HTTPS by itself does not protect against any other types of attack. Site administrators must still take an active role in preventing and mitigating cross-site scripting, injection and many other attacks that target application or other website vulnerabilities. Figure 6: Encryption Operation Encrypting website data relies on cryptographic algorithms and keys.  How to encrypt a website with HTTPS The keys to encrypting a website reside, literally, in the web server. To enable a web server to encrypt all content that it sends, a public key certificate must be installed. The details of installing an SSL certificate and enabling a web server to use it for HTTPS encryption vary depending on which web server software is being used. But, in almost all cases, the process broadly encompasses these steps: 1. Identify all web servers and services that need to be encrypted. Servers may be hosted in the cloud, on premises, or at an internet service provider or other service provider. A single certificate could be used on multiple servers, but doing so can be risky: If the certificate is breached on one server, the attacker would be able to exploit the certificate on any other JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 21 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) servers that use it. Best practice would be to get a separate certificate for each server or service. 2. Get certificates for web servers and services that need them. Certificates can be purchased from a commercial certificate authority, or they may be acquired at no cost from Let's Encrypt, a free, open source and automated CA. Some commercial CAs also offer bare-bones or trial certificates at no charge, and some hosting service providers will provide certificates for customers whose websites are hosted on shared servers. 3. Configure the web server to use HTTPS, rather than HTTP. The web server configuration process includes installing the SSL certificate, turning on support for HTTPS and configuring encryption options for HTTPS. The configuration process will vary depending on whether the server is hosted in the cloud or on premises and which web server software is in use. 4. Administer and manage certificates. Ongoing administration and quality control over encrypted websites is critical. Qualys, a cloud security provider based in Foster City, Calif., offers an SSL server test page that can help. Certificates are issued with limited lifetimes that are typically one year and never for longer than 27 months. 
So, system administrators should regularly test and verify that certificates are valid and flag any that are nearing their end-of- life dates. Similarly, periodic testing should also be carried out to verify that servers are responding properly to valid requests and are fully protecting all data transmissions. Installing a digital certificate and providing users with the ability to make HTTPS connections to your web server is one of the simplest ways you can add security to your website and build user confidence when conducting transactions with you over the web. It eliminates "site not secure" messages from web browsers and ensures communications are not subject to eavesdropping on the internet. 2.12 What Does FTP Server Mean? The primary purpose of an FTP server is to allow users to upload and download files. An FTP server is a computer that has a file transfer protocol (FTP) address and is dedicated to receiving an FTP connection. FTP is a protocol used to transfer files via the internet between a server (sender) and a client (receiver). An FTP server is a computer that offers files available for download via an FTP protocol, and it is a common solution used to facilitate remote data sharing between computers. JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 22 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) An FTP server is an important component in FTP architecture and helps in exchanging files over the internet. The files are generally uploaded to the server from a personal computer or other removable hard drives (such as a USB flash drive) and then sent from the server to a remote client via the FTP protocol. An FTP server needs a TCP/IP network to function and is dependent on the use of dedicated servers with one or more FTP clients. In order to ensure that connections can be established at all times from the clients, an FTP server is usually switched on; up and running 24/7.  An FTP server is also known as an FTP site or FTP host. Although the FTP server actually sends files over the internet, it generally acts as the midpoint between the real sender of a file and its recipient. The recipient must access the server address, which can either be a URL (e.g., ftp://exampleserver.net) or as a numeric address (usually the IP address of the server). All file transfer protocol site addresses begin with ftp://. FTP servers usually listen for client connections on port 21 since the FTP protocol generally uses this port as its principle route of communication. FTP runs on two different Transmission Control Protocol ports: 20 and 21. FTP ports 20 and 21 must both be open on the network for successful file transfers. The FTP server allows the downloading and uploading of files. The FTP server‟s administrator can restrict access for downloading different files and from different folders residing in the FTP server. Files residing in FTP servers can be retrieved by common web browsers, but they may not support protocol extensions like FTPS. With an FTP connection, it is possible to resume an interrupted download that was not successfully completed; in other words, checkpoint restart support is provided. For the client to establish a connection to the FTP server, the username and password are sent using USER and PASS commands. Once accepted by the FTP server, an acknowledgment is sent to the client and the session can start. Failure to open both ports 20 & 21 prevents the full back-and-forth transfer from being made. 
The FTP server can provide connection to users without login credentials; however, the FTP server can authorize these to have only limited access. FTP servers can also provide anonymous access. This access allows users to download files from the servers anonymously but prohibits uploading files to FTP servers. Beyond routine file transfer operations, FTP servers are also used for offsite backup of critical data. FTP servers are quite inexpensive solutions for both data transfer and backup operations, especially if security is not a concern. However, when simple login and authentication features are not sufficient to guarantee an adequate degree of security (such as when transferring sensitive or confidential information), two secure file transfer protocol alternatives, SFTP and FTP/S, are also available. These secure FTP server options offer additional levels of security such as data encryption. JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 23 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) 2.13 Secure FTP Server (SFTP Server) What Does Secure FTP Server (SFTP Server) Mean? A secure FTP server helps users with transferring files over secure file transfer protocols such as SSH File Transfer Protocol or FTP with SSL/TLS. The transfers can be achieved through server-to-server or client-to-server configurations. A secure FTP server helps enterprises in sending confidential files securely over the internet or insecure networks. A secure FTP server needs an SSH client for communication. A secure FTP server supports many actions on files such as file transfers comprised of multiple files, remote file management activities, creations of directories and deletions related to directories and directory listings. A secure FTP server also makes use of protocols to provide security features such as authentication, encryption or data integrity, password management and access control mechanisms. Certain advanced secure FTP servers such as JSCAPE MFT server often provides both SFTP and FTPS protocols along with other file transfer protocols. There are benefits associated with a secure FTP server. It can detect files which are subjected to unauthorized changes, and as such provides greater data integrity. It also capable of preventing malicious users from impersonating legitimate uses to gain access to files. A secure FTP server helps to keep the file contents secure during transmission. It maintains high access control, meaning only authorized users can access the files. It provides a data-at-rest encryption feature which helps to keep the file contents secure during storage. A secure FTP server is also capable of recording file transfer events; this helps in audits/compliance or to support troubleshooting. One of the other advantages of secure FTP server is its capability to automatically detect sensitive data like cardholder data and ePHI, and its ability to help in password management. File Transfer Protocol (FTP) Example FTP software is relatively straightforward to setup. FileZilla is a free, downloadable FTP client. Type in the address of the server you wish to access, the port, and the password for accessing the server. JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 24 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) Figure 7: FileZilla Once access has been granted, the user's files on their local system as well as the accessed server will be visible. 
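A minimal sketch of such a client session using Python's standard ftplib module is shown below (the host name and credentials are placeholders, not real values):

from ftplib import FTP

# Connect to the control port (21); ftplib issues the USER and PASS commands for us.
ftp = FTP("ftp.example.com", timeout=10)
ftp.login(user="demo", passwd="secret")   # placeholder credentials

print(ftp.getwelcome())   # server greeting
print(ftp.nlst())         # list the files in the current directory

# Download a file over the data connection.
with open("report.txt", "wb") as local_file:
    ftp.retrbinary("RETR report.txt", local_file.write)

ftp.quit()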
The user can download files from the server to the local system, or upload files from the local system to the server. They can also, with the proper authorization, make changes to files on the server. 2.14 What is Database Server Definition – Database server refers to combination of hardware and software where they are used to run the database, as per the context. As software, a database server works as back end portion for database applications that is following old client server model. This back-end portion is known as “Instance”. Database server likes as dedicated server that helps to offer database services, and this type of server run database management software. Database server works similar to client-server network model because it delivers all information which is sought by client systems. Many large scale organizations use the database servers because they need lot of data regular basis. If those organizations implement client server architecture where all clients require process data with frequently, then database server associated with it, and they work together with more efficiently. Few companies hire file server for storing and process data, then database server is best option compare to file server. 2.15 Types of Database Server There are different types of database server, and they are also known as “Database Management System” or “Database Server Software”. Below explain each one: JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 25 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) Figure 8: Types of Database Server 2.16 Examples of Database Server  SAP HANA It is developed by SAP SE, and it has ability to manage SAP and non SAP data. It can support OLTP, OLAP, and SQL. It can interface with large number of other types of applications.  DB2 This is designed by IBM,m and it has NoSQL abilities. It is capable to read JSON and XML types of files. Primary objective of designing of this database system, to be used on IBM‟s iSeries server, as well as it can also support to Linux, UNIX, and Windows platforms.  SolarWinds Database Performance Analyzer This is best database management software tool because it helps to perform of SQL query performance in analysis, tuning, and monitoring. It can also support to cross platform like as UNIX, Linux, and Windows.  Oracle Oracle is most popular database that is used as object relational database management software, and it has latest version is 12c (12 Cloud Computing). It can also support to many Windows, Linux and UNIX versions.  IBM DB2 JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 26 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) IBM DB2 introduced in 1983 by IBM, and C, C++, and Assembly languages are used for designing it. Its installation and set up task is very easy, and data can be also accessed very easy. So it helps to save enlarge amount of data approximate up to peta bytes.  Altibase Altibase is an hybrid DBMS, relational open source database and higher performance, so it is used mostly in the high grade enterprise organizations. Till now, Altibase database has been covered approximate 700 enterprise clients along with 8 Fortune Global 500 organizations in several sectors. It offers data processing with highly intensity from memory database region, and it also contains enlarge storage capacity in disk database region.  Microsoft SQL Server This server was introduced in 1989, and its latest update was released in 2016. 
Several languages are used for writing it like as Assembly, C, Linux, C++. It can support of Linux and Windows operating systems. It allows various users to use same database at once.  SAP Sybase ASE ASE stands for “Adaptive Server Enterprise”, and it has latest update version is 15.7. It has ability to perform millions of transactions in second, and with using of cloud computing all mobile devices can be synchronized along with this database.  Teradata Teradata database was developed in 1979, and it supports Windows and Linux operating systems. Data expose and impose can be done very easy as well as multiple processing also can be done at once. It is comfortable for enlarge database.  ADABAS ADABAS stands for “Adaptable Database System”, and it has higher data processing speed, result of all transaction is more reliable.  MySQL MySQL is getting more popularity for different web based applications. It is available in freeware and paid version. It can be run on Linux and Windows operating system.  Features are: JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 27 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) Offering much functionality in free version database It has several user interfaces which can be implemented. It supports to other database systems like as DB2 and Oracle.  FileMaker FileMaker database server can work on several types of operating systems like as Windows, Linux, Unix, Mac etc. It has latest update version is 15.0.3. This database server is capable to make connection along with different platforms such as connections with SQL. Due to cloud system, its information can be shared easily.  Microsoft Access It depends only Microsoft Windows, and it has latest updated version is 16.0.4229.1024. This database management system is cost effective, so it is used for E-Commerce websites.  Informix This database management system is introduced by IBM, and it is written by C, C++, and Assembly languages. It has latest updated version is 12.10.xC7. Its hardware does not need more space and maintenance time, as well as containing data every time.  SQLite SQLite is open source database management tool, and it is written by C language. This database system is implemented for mobile devices. It can support to Mac, Windows, and Linux operating system. It is comfortable for storing small to medium size data of websites, and it needs only less space.  PostgreSQL This is advance object relational database management system, and it is also available as open- source tool. It works on Windows and Linux operating system. It has current updated version is 9.6.2. It has great data security, and fastest data retrieval process. It offers fastest data sharing through dashboards.  Amazon RDS JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 28 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) It is also known as “Amazon Relational Database Service”. It is more protective database system, and its configuration setup and using is very easy task. It contains backing up and recovery of data feature inbuilt.  MongoDB This database system is available in free and commercial version. It is developed for those applications, which use both structured and unstructured data. This engine can support both JSON and NoSQL documents. It has latest updated version MongoDB 3.2.  Redis Redis database system is an open source tool with BDS licensed. Its latest updated release is 3.2.8. 
It can support to Linux and Windows operating system. This is written by ANSI C language. It supports hashes and strings data types, and its database speed and queries performance is great.  CouchDB CouchDB is availabe in both version free and paid, and it is coded in Erlang language. Its latest updated version is 2.0.0. It can also support both Linux and Windows operating system. It is secure system network, and efficient error handling.  Neo4j Neo4j contains enlarge capacity server, and it stores all data in the graphical form. So it is known as “Graph Database Management System”. Its latest updated release is 3.1.0, and it is written by Java language. This database system can run on Linux, UNIX, and Windows operating system.  OrientDB It is also known as “Graph Database Management System” because it stores all data in graphical form. Its current stable release id 2.2.17, and it is written by Java language. It is used mostly real-time web base applications over big data market. It can support to Linux and Windows operating system.  Couchbase Couchbase is an open source database management system, and it is coded with C++, C, Erland languages. Its current updated stable release is 4.5. It supports to Linux and Windows platforms.  Toad JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 29 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) It is easy to set up and use, and it helps to produce result with highly efficient as well as its enlarge amount of data can be exported in several formats, does not need more time for its maintenance.  phpMyAdmin This database server is available as open-source, and it contains user friendly interface. It has currently stable updated version 4.6.6. This database server is written by XHTML, JavaScript, and PHP languages. It is capable to support of Linux and Windows operating systems. Its data can be exported in different file formats like as CSV, SQL, and XML. It has also ability to import data into both file formats (CSV and SQL).  SQL Developer It is also open-source database server. It has ability to execute all queries into different formats like as HTML, PDF, XML, and Excel. It has latest updated version 4.1.5.21.78. This system is written by Java language. It can run on both platforms like as Linux and Windows.  Sequel Pro This database system is used for Mac database. It is easy to operate and works along with MySQL database. Easy to connectivity and more flexible. Installation is easy and quick.  Robomongo This database server is also free and open-source and it can support to both platforms Linux and Windows. It can bear enlarge quantity of load as well as getting great error handling system.  DbVisualizer It has user friendly interface, and easy to set up as well as installation. It also offers better facility for exporting data in CSV format. User has option to scroll down to view the result, if any time it produces enlarges number of rows for retrieving.  Hadoop HDFS This database system offers to enlarge data storage and uses several machines for storing data. Easy to access of data, and due to data redundancy, it prevents data loss. Parallel processing of data and data authentication is also available.  Cloudera JTO Ph-II Week 2 Version 3.0 Aug 2021 Page 30 of 131 For Restricted Circulation JTO Ph-II DNIT Server implementation (Web, FTP, Database) It has higher speed of data processing so mostly large enterprises use this database system. 
• MariaDB
This database management system is also available in both free and paid versions. The database engine allows you to choose from a variety of storage engines, and it makes good use of resources via an optimizer that improves query performance and processing. It can run on Windows, Linux, UNIX and Mac operating systems.

• Informix Dynamic Server
It can support Windows, Linux, UNIX and Mac operating systems. It offers multi-core support, multiple threads, internet protocols, and real-time database access.

• 4D
4D stands for "4th Dimension", and it supports the Mac and Windows platforms. It can import and export data and also offers a drag-and-drop facility.

2.17 Function and Working of Database Server
A database server is a high-performance computer system that provides other computers with services for accessing and retrieving data held in a database. The "front end" runs on the local machines operated by users requesting access to the database server, while the "back end" runs on the database server itself and can be reached through a remote shell. Once the requested information has been retrieved from the database, it is returned to the user who requested it.

2.18 Conclusion
HTTP, FTP and database servers play a key role in providing essential network-based services; it is hard to imagine the Internet without them. These servers meet demanding, high-performance network requirements and generate countless responses for clients. A sound understanding of these services helps in providing seamless service to customers.

CHAPTER 3 : DATABASE MANAGEMENT SYSTEM

3.1 OBJECTIVE
The objectives of this chapter are to understand
• DBMS concept and characteristics
• DBMS architecture
• DBMS data models, instances and schemas
• RDBMS concept
• Difference between DBMS and RDBMS

3.2 DBMS OVERVIEW
A database is a collection of related data, and data is a collection of facts and figures that can be processed to produce information. Mostly, data represents recordable facts. Data aids in producing information, which is based on facts. For example, if we have data about the marks obtained by all students, we can draw conclusions about toppers and average marks. A database management system stores data in such a way that it becomes easier to retrieve, manipulate, and produce information.

3.3 CHARACTERISTICS
A modern DBMS has the following characteristics −
• Real-world entity − A modern DBMS is more realistic and uses real-world entities to design its architecture. It uses their behaviour and attributes too. For example, a school database may use students as an entity and their age as an attribute.
• Relation-based tables − A DBMS allows entities and the relations among them to form tables. A user can understand the architecture of a database just by looking at the table names.
• Isolation of data and application − A database system is entirely different from its data. A database is an active entity, whereas data is said to be passive, on which the database works and which it organizes. A DBMS also stores metadata, which is data about data, to ease its own processing.
• Less redundancy − A DBMS follows the rules of normalization, which split a relation whenever any of its attributes holds redundant values. Normalization is a mathematically rigorous and scientific process that reduces data redundancy.
• Consistency − Consistency is a state in which every relation in a database remains consistent. Methods and techniques exist that can detect any attempt to leave the database in an inconsistent state. A DBMS can provide greater consistency than earlier forms of data-storage applications such as file-processing systems.
• Query language − A DBMS is equipped with a query language, which makes retrieving and manipulating data more efficient. A user can apply as many different filtering options as required to retrieve a set of data, something that was traditionally not possible with file-processing systems.
• ACID properties − A DBMS follows the concepts of Atomicity, Consistency, Isolation and Durability (normally shortened to ACID). These concepts are applied to transactions, which manipulate data in the database, and they help the database stay healthy in multi-transactional environments and in case of failure (a short transaction sketch illustrating these properties is given just before Section 3.5).
• Multi-user and concurrent access − A DBMS supports a multi-user environment and allows users to access and manipulate data in parallel. Restrictions are placed on transactions when users attempt to handle the same data item, but users remain unaware of them.
• Multiple views − A DBMS offers multiple views for different users. A user in the Sales department will have a different view of the database than a person working in the Production department. This feature enables users to have a focused view of the database according to their requirements.
• Security − Features like multiple views offer security to some extent, since users are unable to access the data of other users and departments. A DBMS offers methods to impose constraints while entering data into the database and while retrieving it at a later stage. It provides many different levels of security features, which enable multiple users to have different views with different capabilities. For example, a user in the Sales department cannot see data that belongs to the Purchase department; additionally, how much of the Sales department's data is displayed to the user can also be managed. Since a DBMS does not store its data on disk in the way traditional file systems do, it is very hard for miscreants to break in.

3.4 USERS
A typical DBMS has users with different rights and permissions who use it for different purposes. Some users retrieve data and some back it up. The users of a DBMS can be broadly categorized as follows −

Figure 9: DBMS Users

• Administrators − Administrators maintain the DBMS and are responsible for administering the database. They oversee its usage and decide by whom it should be used. They create access profiles for users and apply restrictions to maintain isolation and enforce security. Administrators also look after DBMS resources such as the system licence, required tools, and other software- and hardware-related maintenance.
• Designers − Designers are the group of people who actually work on the design of the database. They keep a close watch on what data should be kept and in what format. They identify and design the whole set of entities, relations, constraints and views.
• End users − End users are those who actually reap the benefits of having a DBMS. They can range from simple viewers, who pay attention to logs or market rates, to sophisticated users such as business analysts.
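As a small illustration of the ACID behaviour described in Section 3.3, the hedged sketch below uses Python's built-in sqlite3 module and a hypothetical accounts table (neither is part of this course material). The two updates of a funds transfer either both become durable (commit) or both are discarded (rollback), which is the atomicity property; real DBMS products expose the same idea through their own drivers and query languages.

```python
import sqlite3

conn = sqlite3.connect("bank.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
cur.executemany("INSERT OR IGNORE INTO accounts (id, balance) VALUES (?, ?)", [(1, 500), (2, 300)])
conn.commit()

try:
    # Transfer 100 from account 1 to account 2 as one atomic transaction.
    cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
    cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
    conn.commit()      # both updates become durable together
except sqlite3.Error:
    conn.rollback()    # on any failure, neither update is applied
finally:
    conn.close()
```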
3.5 DBMS ARCHITECTURE
The design of a DBMS depends on its architecture. The basic client/server architecture is used to deal with a large number of PCs, web servers, database servers and other components connected over networks. A client/server architecture consists of many PCs and workstations connected via the network. DBMS architecture depends on how users are connected to the database to get their requests served.

• TYPES OF DBMS ARCHITECTURE
Database architecture can be seen as single-tier or multi-tier. Logically, multi-tier database architecture is of two types: 2-tier architecture and 3-tier architecture.

Figure 10: Types of DBMS Architecture

• 1-Tier Architecture
  - In this architecture, the database is directly available to the user: the user works directly on the DBMS and uses it.
  - Any change made here is applied directly to the database itself. It does not provide a handy tool for end users.
  - The 1-tier architecture is used for developing local applications, where programmers can communicate directly with the database for a quick response.

• 2-Tier Architecture
  - The 2-tier architecture is essentially the basic client-server model. In the two-tier architecture, applications on the client end communicate directly with the database on the server side. APIs such as ODBC and JDBC are used for this interaction.
  - The user interfaces and application programs run on the client side.
  - The server side is responsible for providing functionality such as query processing and transaction management.
  - To communicate with the DBMS, the client-side application establishes a connection with the server side.

Figure 11: 2-Tier Architecture

• 3-Tier Architecture
  - The 3-tier architecture contains another layer between the client and the server. In this architecture, the client cannot communicate directly with the server.
  - The application on the client end interacts with an application server, which in turn communicates with the database system.
  - The end user has no idea of the existence of the database beyond the application server, and the database has no idea of any user beyond the application.
  - The 3-tier architecture is used for large web applications.

Figure 12: 3-Tier Architecture

3.6 DBMS DATA MODELS
Data models define how the logical structure of a database is modelled. Data models are fundamental entities for introducing abstraction in a DBMS. They define how data items are connected to each other and how they are processed and stored inside the system.

• Entity-Relationship Model
The Entity-Relationship (ER) Model is based on the notion of real-world entities and the relationships among them. While formulating a real-world scenario into a database model, the ER model creates entity sets, relationship sets, general attributes and constraints. The ER model is best used for the conceptual design of a database. It is based on −
  - entities and their attributes;
  - relationships among entities.
These concepts are explained below.
Figure 13: ER Model

• Entity − An entity in an ER model is a real-world object having properties called attributes. Every attribute is defined by its set of permitted values, called its domain. For example, in a school database, a student is considered an entity. A student has various attributes such as name, age and class.
• Relationship − The logical association among entities is called a relationship. Relationships are mapped with entities in various ways. Mapping cardinalities define the number of associations between two entities. The mapping cardinalities are −
  o one to one
  o one to many
  o many to one
  o many to many

• Relational Model
The most popular data model in DBMS is the relational model. It is more scientific than the other models. This model is based on first-order predicate logic and defines a table as an n-ary relation.

Figure 14: Relational Model

The main highlights of this model are −
• Data is stored in tables called relations.
• Relations can be normalized.
• In normalized relations, the values saved are atomic values.
• Each row in a relation contains a unique value.
• Each column in a relation contains values from the same domain.

3.7 DATABASE SCHEMA
A database schema is the skeleton structure that represents the logical view of the entire database. It defines how the data is organized and how the relations among the data are associated. It formulates all the constraints that are to be applied to the data.
A database schema defines its entities and the relationships among them. It contains a descriptive detail of the database, which can be depicted by means of schema diagrams. It is the database designers who design the schema to help programmers understand the database and make it useful.
A database schema can be divided broadly into two categories −
Physical database schema − This schema pertains to the actual storage of data and its form of storage, such as files and indices. It defines how the data will be stored in secondary storage.
Logical database schema − This schema defines all the logical constraints that need to be applied to the data stored. It defines tables, views, and integrity constraints.

Figure 15: Database Schema

3.8 DATABASE INSTANCE
It is important that we distinguish these two terms individually. The database schema is the skeleton of the database. It is designed when the database does not yet exist; once the database is operational, it is very difficult to make any changes to it. A database schema does not contain any data or information.
A database instance is a state of an operational database, with data, at a given point in time. It is a snapshot of the database. Database instances tend to change with time. A DBMS ensures that every instance (state) is valid by diligently enforcing all the validations, constraints and conditions that the database designers have imposed.

3.9 DATA INDEPENDENCE
A database system normally contains a lot of data in addition to users' data. For example, it stores data about data, known as metadata, to locate and retrieve data easily. It is rather difficult to modify or update a set of metadata once it is stored in the database, but as a DBMS expands, it needs to change over time to satisfy the requirements of its users. If all of this data were tightly dependent, modifying it would become a tedious and highly complex job.
Figure 16: Data Independence

Metadata itself follows a layered architecture, so that when we change data at one layer, it does not affect the data at another layer. The layers are independent of each other but mapped to one another.

3.10 LOGICAL DATA INDEPENDENCE
Logical data is data about the database; that is, it stores information about how the data is managed inside. For example, a table (relation) stored in the database, together with all the constraints applied to that relation, is logical data. Logical data independence is the mechanism that separates this level from the actual data stored on disk: if we make changes to the table format, the data residing on the disk should not have to change.

3.11 PHYSICAL DATA INDEPENDENCE
All the schemas are logical, and the actual data is stored in bit format on the disk. Physical data independence is the ability to change the physical data without impacting the schema or the logical data.
For example, if we want to change or upgrade the storage system itself − say, replace hard disks with SSDs − it should not have any impact on the logical data or the schemas.

3.12 RDBMS
RDBMS stands for Relational Database Management System. All modern database management systems such as MS SQL Server, IBM DB2, Oracle, MySQL and Microsoft Access are based on RDBMS. It is called a Relational Database Management System because it is based on the relational model introduced by E.F. Codd.

• Codd's 12 Rules
Dr Edgar F. Codd, after his extensive research on the relational model of database systems, came up with twelve rules of his own which, according to him, a database must obey in order to be regarded as a true relational database.

Rule 1: Information Rule
The data stored in a database, whether it is user data or metadata, must be a value of some table cell. Everything in a database must be stored in a table format.

Rule 2: Guaranteed Access Rule
Every single data element (value) is guaranteed to be accessible logically through a combination of table name, primary key (row value) and attribute name (column value). No other means, such as pointers, can be used to access data.

Rule 3: Systematic Treatment of NULL Values
The NULL values in a database must be given a systematic and uniform treatment. This is a very important rule because a NULL can be interpreted as one of the following: data is missing, data is not known, or data is not applicable.

Rule 4: Active Online Catalog
The structure description of the entire database must be stored in an online catalog, known as the data dictionary, which can be accessed by authorized users. Users can use the same query language to access the catalog that they use to access the database itself.

Rule 5: Comprehensive Data Sub-Language Rule
A database can only be accessed using a language having linear syntax that supports data definition, data manipulation and transaction management operations. This language can be used directly or by means of some application. If the database allows access to data without the help of this language, it is considered a violation.

Rule 6: View Updating Rule
All the views of a database that can theoretically be updated must also be updatable by the system.

Rule 7: High-Level Insert, Update, and Delete Rule
A database must support high-level insertion, updating, and deletion.
This must not be limited to a single row; that is, it must also support union, intersection and minus operations to yield sets of data records.

Rule 8: Physical Data Independence
The data stored in a database must be independent of the applications that access the database. Any change in the physical structure of a database must not have any impact on how the data is accessed by external applications.

Rule 9: Logical Data Independence
The logical data in a database must be independent of its user's view (application). Any change in the logical data must not affect the applications using it. For example, if two tables are merged or one is split into two different tables, there should be no impact or change on the user application. This is one of the most difficult rules to apply.

Rule 10: Integrity Independence
A database must be independent of the application that uses it. All its integrity constraints can be modified independently, without the need for any change in the application. This rule makes a database independent of the front-end application and its interface.

Rule 11: Distribution Independence
The end user must not be able to see that the data is distributed over various locations. Users should always get the impression that the data is located at one site only. This rule has been regarded as the foundation of distributed database systems.

Rule 12: Non-Subversion Rule
If a system has an interface that provides access to low-level records, then that interface must not be able to subvert the system and bypass security and integrity constraints.

• How It Works
In an RDBMS, data is represented in terms of tuples (rows). The relational database is the most commonly used kind of database. It contains a number of tables, and each table has its own primary key. Because the data is organized as a collection of tables, it can be accessed easily in an RDBMS.

TABLE
An RDBMS uses tables to store data. A table is a collection of related data entries and contains rows and columns to store the data. A table is the simplest example of data storage in an RDBMS.

Figure 17: Database Table

FIELD
A field is a smaller entity of the table that holds specific information about every record in the table. In the example above, the fields of the student table are id, name, age, and course.

ROW/RECORD
A row of a table is also called a record. It contains the specific information of each individual entry in the table. It is a horizontal entity in the table.

COLUMN
A column is a vertical entity in the table that contains all the information associated with a specific field of the table. For example, "name" is a column in the above table which contains all information about students' names.

NULL VALUES
A NULL value in a table indicates that the field was left blank during record creation. It is entirely different from a value of zero or a field that contains a space.

Data Integrity
The following categories of data integrity exist in every RDBMS:
Entity integrity: It specifies that there should be no duplicate rows in a table.
Domain integrity: It enforces valid entries for a given column by restricting the type, the format, or the range of values.
Referential integrity: It specifies that rows which are referenced by other records cannot be deleted.
User-defined integrity: It enforces specific business rules defined by users. These rules are different from entity, domain or referential integrity.
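The sketch below is a hedged illustration of how these integrity categories typically map onto SQL constraints. It uses Python's built-in sqlite3 module and hypothetical students/courses tables (not part of this document); the exact syntax varies between RDBMS products.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces foreign keys only when enabled

conn.executescript("""
CREATE TABLE courses (
    course_id INTEGER PRIMARY KEY,                  -- entity integrity: no duplicate rows
    title     TEXT NOT NULL
);
CREATE TABLE students (
    id        INTEGER PRIMARY KEY,                  -- entity integrity
    name      TEXT NOT NULL,
    age       INTEGER CHECK (age BETWEEN 5 AND 99), -- domain integrity: restrict the value range
    course_id INTEGER REFERENCES courses(course_id) -- referential integrity
);
""")

conn.execute("INSERT INTO courses VALUES (1, 'Computer Networks')")
conn.execute("INSERT INTO students VALUES (101, 'Asha', 21, 1)")

# This insert violates referential integrity (course 99 does not exist) and is rejected.
try:
    conn.execute("INSERT INTO students VALUES (102, 'Ravi', 23, 99)")
except sqlite3.IntegrityError as err:
    print("Rejected:", err)
```

User-defined integrity would typically be expressed with additional CHECK constraints or triggers that encode business rules.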
3.13 DBMS VS RDBMS
Although both DBMS and RDBMS are used to store information in a physical database, there are some remarkable differences between them. The main differences between DBMS and RDBMS are given below:

No. | DBMS | RDBMS
1) | DBMS applications store data as files. | RDBMS applications store data in tabular form.
2) | In DBMS, data is generally stored in either a hierarchical form or a navigational form. | In RDBMS, the tables have an identifier called the primary key, and the data values are stored in the form of tables.
3) | Normalization is not present in DBMS. | Normalization is present in RDBMS.
4) | DBMS does not apply any security with regard to data manipulation. | RDBMS defines integrity constraints for the purpose of the ACID (Atomicity, Consistency, Isolation and Durability) properties.
5) | DBMS uses the file system to store data, so there will be no relation between the tables. | In RDBMS, data values are stored in the form of tables, so a relationship between these data values is stored in the form of a table as well.
6) | DBMS has to provide some uniform methods to access the stored information. | An RDBMS supports a tabular structure of the data and relationships between them to access the stored information.
7) | DBMS does not support distributed databases. | RDBMS supports distributed databases.
8) | DBMS is meant for small organizations and deals with small amounts of data. It supports a single user. | RDBMS is designed to handle large amounts of data. It supports multiple users.
9) | Examples of DBMS are file systems, XML, etc. | Examples of RDBMS are MySQL, PostgreSQL, SQL Server, Oracle, etc.

Table 3: DBMS vs RDBMS

After observing the differences between DBMS and RDBMS, you can say that RDBMS is an extension of DBMS. There are many software products in the market today that are compatible with both DBMS and RDBMS, meaning that today an RDBMS application is effectively also a DBMS application and vice versa.

3.14 CONCLUSION
A DBMS is perhaps most useful for providing a centralized view of data that can be accessed by multiple users, from multiple locations, in a controlled manner. The goal of a DBMS is to offer more convenient as well as more efficient access to data in the database, with high security. A general-purpose DBMS is a software system designed to allow the definition, creation, querying, update and administration of databases.

CHAPTER 4 : CYBER ATTACKS

4.1 OBJECTIVE
The objectives of this chapter are to understand
• Cyber attacks and their classification
• Reasons for cyber attacks
• Types of cyber attacks
• Hacking and types of hackers
• Some famous cyber attacks

4.2 INTRODUCTION TO CYBER ATTACK
A cyber attack is "an attack initiated from a computer against a website, computer system or individual computer … that compromises the confidentiality, integrity or availability of the computer or information stored on it". Cyber attacks take many forms.
Their objectives include:
• Gaining, or attempting to gain, unauthorized access to a computer system or its data.
• Unwanted disruption or denial-of-service attacks, including the takedown of entire websites.
• Installation of viruses or malware − that is, malicious code − on a computer system.
• Unauthorized use of a computer system for processing or storing data.
• Changes to the characteristics of a computer system's hardware, firmware or software without the owner's knowledge, instruction or consent, and
• Inappropriate use of computer systems by employees or former employees.

4.3 CLASSIFICATION OF CYBER ATTACK
Broadly, cyber attacks can be classified into two categories:

• Insider Attack: An attack on a network or a computer system by a person with authorized system access is known as an insider attack. It is generally performed by dissatisfied or unhappy inside employees or contractors. The motive of an insider attack could be revenge or greed. It is comparatively easy for an insider to carry out a cyber attack, as he is well aware of the policies, processes, IT architecture and weaknesses of the security system; moreover, the attacker already has access to the network. It is therefore comparatively easy for an insider attacker to steal sensitive information, crash the network, and so on. In most cases an insider attack happens when an employee is fired or assigned new roles in an organization and the change of role is not reflected in the IT policies; this opens a window of vulnerability for the attacker. Insider attacks can be prevented by planning and installing an internal intrusion detection system (IDS) in the organization.

• External Attack: When the attacker is either hired by an insider or is an entity external to the organization, the attack is known as an external attack. The organization that is the victim of a cyber attack faces not only financial loss but also loss of reputation. Since the attacker is external to the organization, such attackers usually scan the network and gather information before attacking. An experienced network/security administrator keeps a regular eye on the logs generated by the firewalls, as external attacks can be traced by carefully analysing these firewall logs. Intrusion detection systems are also installed to keep an eye on external attacks.

Cyber attacks can also be classified as structured attacks and unstructured attacks, based on the level of maturity of the attacker.

• Unstructured attacks: These attacks are generally performed by amateurs who do not have any predefined motive for performing the cyber attack. Usually these amateurs try to test a tool read
