ITF Chapter 6 Software Development PDF

Summary

This document provides an overview of foundational software development concepts and different programming languages. It covers notational systems, data types, programming language categories, and programming concepts. The material is a chapter from a CompTIA IT Fundamentals+ (ITF+) FC0-U61 study guide.

Full Transcript


Chapter 6 Software Development

THE FOLLOWING COMPTIA IT FUNDAMENTALS+ (ITF+) FC0-U61 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:

1.1 Compare and contrast notational systems
  - Binary
  - Hexadecimal
  - Decimal
  - Data representation
    - ASCII
    - Unicode

1.2 Compare and contrast fundamental data types and their characteristics
  - Char
  - Strings
  - Numbers
    - Integers
    - Floats
  - Boolean

4.1 Compare and contrast programming language categories
  - Interpreted
    - Scripting languages
    - Scripted languages
    - Markup languages
  - Compiled programming languages
  - Query languages
  - Assembly language

4.2 Given a scenario, use programming organizational techniques and interpret logic
  - Organizational techniques
    - Pseudocode concepts
    - Flow chart concepts
    - Sequence
  - Logic components
    - Branching
    - Looping

4.3 Explain the purpose and use of programming concepts
  - Identifiers
    - Variables
    - Constants
  - Containers
    - Arrays
    - Vectors
  - Functions
  - Objects
    - Properties
    - Attributes
    - Methods

It's easy to imagine the creators of ENIAC or other early computers looking at each other and saying, "There has to be an easier way." They had created what was (at the time) an engineering marvel—an automated computer that could do thousands of mathematical operations per second. It was significantly faster than any human at doing math, but there was a problem. The developers had to tell the computer what to do, and that "programming" process could take weeks for a team of engineers to create and debug. Any change in the program, even a simple one such as telling it to subtract instead of add, took hours of changes and testing. And if they had a different math equation altogether, it required the creation of an entirely new program. There had to be an easier way.

Fast-forward to today, and it's impossible to count the number of software titles in existence. From operating systems to productivity software to games to—you name it—the breadth and depth of available programs would blow the minds of those early computer pioneers. One thing hasn't changed, though: People still need to tell computers what to do. Computers can't independently think for themselves. Perhaps that will change (for better or worse) with advances in artificial intelligence, but for now computers need instructions, and those instructions are delivered via preprogrammed software packages. It is the easier way.

This chapter gives you an overview of foundational software development concepts. It's not intended to turn you into a programmer—one chapter could not do that task justice. Rather, at the end of this chapter, you will understand the basics of different programming languages as well as some concepts and techniques that programmers use to make their jobs easier. If you find these topics interesting and want to pursue a path that includes programming, there are ample courses, books, and online materials to help you get there.

Exploring Programming Languages

All of the software that you use today, whether it's on your smartphone, a workstation, or a cloud-based app, was created by a programmer. More than likely, it was created by a team of them. Small and simple programs might have only a few hundred lines of code, whereas an elaborate program like Windows 10 reportedly has about 65 million lines. Now, most developers will tell you that lines of code are a terrible measure for anything—the goal is to get the desired functionality with as little code as possible—but the Windows 10 statistic underscores the complexity of some applications.
I use the terms programmers, developers, and coders interchangeably in this chapter.

Just as there are a large number of software titles on the market, numerous programming languages exist. Each language has its own grammar and syntax, much like the rules of English are different from the rules of Spanish. Software developers typically specialize in one or two languages but have a basic foundational understanding of the popular ones in the marketplace. For example, a coder might identify as a C++ programmer but also know a scripting language like JavaScript. Because of the similarities between language concepts, the coder can look at code from an unfamiliar language and generally understand what it does.

In this section, you will learn about four categories of programming languages: assembly, compiled, interpreted, and query.

ASSEMBLY LANGUAGE

Any programmer writing code is telling the computer—specifically, the processor—what to do. Assembly language is the lowest level of code in which people can write. That is, it allows the developer to provide instructions directly to the hardware. It got its name because after it's created, it's translated into executable machine code by a program called an assembler. Assembly code is specific to processor architectures. A program written for a 32-bit Intel chip will look different than one written for an ARM chip, even if the functionality is identical.

Originally developed in 1947 at the University of London, assembly was the primary programming language used for quite some time. Operating systems such as IBM's PC DOS and programs like the Lotus 1-2-3 spreadsheet software were coded in assembly, as well as some console-based video games. In the 1980s, higher-level languages overtook assembly in popularity.

Even though it doesn't dominate the landscape, assembly is still used today because it has some advantages over higher-level languages. It can be faster. It's also used for direct hardware access, such as in the system BIOS, with device drivers, and in customized embedded systems. It's also used for the reverse engineering of code. Translating high-level code into assembly is fairly straightforward, but trying to disassemble code into a higher-level language might not be. On the downside, some virus programmers like it because it's closer to the hardware.

Understanding Notational Systems

Before I get into how assembly works or what it looks like, it's important to take a step back and think about how computers work. Computers only understand the binary notational system—1s and 0s. Everything that a computer does is based on those two digits—that's really profound. When you're playing a game, surfing the web, or chatting with a friend using your computer, it's really just a tremendously long string of 1s and 0s. Recall from Chapter 1, "Core Hardware Components," the basic organizational structure of these 1s and 0s. One digit (either a 1 or a 0) is a bit, and eight digits form a byte.

There are a lot of real-life practical examples of binary systems. Take light switches, for example. A conventional light switch is either on (1) or off (0). Ignoring dimmable lighting for a minute, with a traditional switch, the light is in one of two distinct states: on or off. This is binary, also known as "base 2" because there are two distinct values. Humans are far more used to counting in base 10, which is the decimal notational system. In decimal, the numbers 0 through 9 are used.
To show the number that's one larger than 9, a second digit is added in front of it and the rightmost digit is reset to 0. This is just a complicated way of telling you something you already know, which is 9 + 1 = 10. Binary math works much the same way. The binary value 1 equals a decimal value of 1. If you add 1 + 1 in binary, what happens? The 1 can't increase to 2, because only 1s and 0s are allowed. So, the 1 resets to a 0, and a second digit is added in front of it. Thus, in binary, 1 + 1 = 10. Then, 10 + 1 = 11, and 11 + 1 = 100. If you're not accustomed to looking at binary, this can be a bit confusing!

Now think about the structure of a byte, which is eight bits long. If you want to convert binary to decimal, the bit's position in the byte determines its value. Table 6.1 illustrates what I mean.

TABLE 6.1 Converting binary to decimal

Position   8     7     6     5     4     3     2     1
Bit        1     1     1     1     1     1     1     1
Base       2^7   2^6   2^5   2^4   2^3   2^2   2^1   2^0
Value      128   64    32    16    8     4     2     1

If the bit is set to 1, it has the value shown in the Value row. If it's set to 0, then its value is 0. Using the example from a few paragraphs ago, you can now see how binary 100 equals a decimal 4. The binary number 10010001 is 128 + 16 + 1 = 145 in decimal. Using one byte, decimal values between 0 and 255 can be represented.

It's unlikely that you will be asked to perform binary to decimal conversion on the IT Fundamentals+ (ITF+) exam. It's a good concept to understand, though, and it's material to understanding how assembly works!

To take things one step further, there's a third system used commonly in programming, which is the hexadecimal notational system, or base 16. You'll also see it referred to as hex. In hex, the numbers 0 to 9 are used, just like in decimal. However, the letters A to F are used to represent decimal numbers 10 through 15. So, in hex, F + 1 = 10. Aren't notational systems fun? The key when dealing with numbers in programming is to understand clearly which notational system you're supposed to be using. Exercise 6.1 will give you some practice converting between the three systems. This is very similar to Exercise 1.1 that you did back in Chapter 1, but having familiarity with these systems is important.

EXERCISE 6.1 Converting Between Decimal and Other Numbering Systems

1. In Windows 10, open the Calculator application.
2. Click the Calculator menu button, and choose Programmer to switch to Programmer view, as shown in Figure 6.1. Notice on the left that there is a mark next to the DEC option because it's set to decimal.

FIGURE 6.1 Calculator in Programmer view

3. Enter the number 42.
4. Notice that the calculator shows you the hexadecimal (HEX), decimal (DEC), octal (OCT), and binary (BIN) conversions of the number. If you have an older version of Calculator, you will need to click the radio buttons next to these options to perform the conversion.
5. Now enter the number 9483. Notice that in binary, two bytes' worth of bits are displayed, and there are four hex characters used.
6. Experiment with other numbers. What would your birthdate look like in binary or hex? Close the calculator when you are finished.

Some programming languages will use the prefix 0x in front of a number to indicate that it's in hexadecimal. For example, you might see something like 0x16FE. That just means the hex number 16FE. Other languages will use an h suffix, so it would be written as 16FEh.
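If you're curious what these conversions look like in actual code, here is a minimal sketch in Java (a compiled language you'll see later in this chapter). It isn't part of the chapter's examples; the class name and the values chosen are just for illustration.

public class NotationDemo {
    public static void main(String[] args) {
        int value = 145; // a decimal number

        // Convert decimal to binary and hexadecimal strings
        System.out.println(Integer.toBinaryString(value)); // prints 10010001
        System.out.println(Integer.toHexString(value));    // prints 91

        // Parse strings written in other bases back into decimal
        System.out.println(Integer.parseInt("10010001", 2)); // prints 145
        System.out.println(Integer.parseInt("16FE", 16));    // prints 5886
    }
}

Running it performs the same conversions as the Programmer calculator in Exercise 6.1, just expressed in code rather than buttons.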
Binary, decimal, and hex work great for representing numbers, but what about letters and special characters? There are notational systems for these as well. The first is American Standard Code for Information Interchange (ASCII), which is pronounced ask-ee. ASCII codes represent text and special characters on computers and telecommunications equipment. The standard ASCII codes use seven bits to store information, which provides for only 128 characters. Therefore, standard ASCII only has enough space to represent standard English (Latin) uppercase and lowercase letters, numbers 0 to 9, a few dozen special characters, and (now obsolete) codes called control codes. Table 6.2 shows you a small sample of ASCII codes. You can find the full table at www.asciitable.com.

TABLE 6.2 Sample ASCII codes

Dec   Hex   HTML     Character
33    21    &#33;    !
56    38    &#56;    8
78    4E    &#78;    N
79    4F    &#79;    O
110   6E    &#110;   n

Covering only the Latin alphabet isn't very globally inclusive, so a superset of ASCII was created called Unicode. The current version of Unicode supports 136,755 characters across 139 language scripts and several character sets. Unicode has several standards. UTF-8 uses 8-bit code units and matches ASCII for the first 128 characters. UTF-16 uses 16 bits (allowing for 65,536 characters, covering what's known as the Basic Multilingual Plane) and is the most common standard in use today. UTF-32 allows for coverage of the full set of characters. The Unicode table is at unicode-table.com/en.
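To see the relationship between characters and their numeric codes in practice, here is another minimal Java sketch (again just an illustration, not one of the chapter's examples) that moves back and forth between characters and the values that represent them.

public class CharCodes {
    public static void main(String[] args) {
        char letter = 'N';
        System.out.println((int) letter);  // prints 78, the decimal code for N (see Table 6.2)
        System.out.println((char) 110);    // prints n, the character stored at code 110

        // Unicode works the same way for characters beyond basic ASCII
        System.out.println("\u00E9");      // prints the Latin small letter e with an acute accent
    }
}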
Working with Assembly

Coding in assembly is not for the faint of heart. As I mentioned earlier, you need to know the version specific to the processor's platform. In addition, you need to know how memory segmentation works and how processor codes will respond in protected and unprotected memory environments. There's more to it than those few criteria, but suffice it to say that it's challenging work.

Let's start with a simple example, remembering that all computers understand are 1s and 0s. Say that you have a 32-bit Intel processor and want to move a simple 8-bit number into a memory register. The binary code to tell the processor to move data is 10110, followed by a 3-bit memory register identifier. For this example, you'll use the lowest part of the accumulator register (you won't need to know this for the exam), which is noted as AL. The code for this register is 000. Finally, you need to tell the CPU the number that you want to move into this register—you'll use 42. In binary, the command looks like this:

10110000 00101010

That's not very user friendly or easy to remember. Using binary to hex conversion, you can simplify it to the following:

B0 2A

That literally means, "Move into memory register AL the number 42." It's still not very user friendly. To help ease the challenge, assembly developers have created mnemonic codes to help programmers remember commands. For example, the command MOV (short for move) is a mnemonic to replace the binary or hex code. So, the command can now be written as follows:

MOV AL, 2Ah    ;Move the number 42 (2A hex) into AL

You'll notice a few things here. The first is that the command MOV AL is much easier to remember than the binary code, and it makes more sense in human terms than does B0. The second is that I added some real words after a semicolon. Most languages allow coders the ability to add comments that are not processed. In assembly, anything on a line after a semicolon is considered a comment and ignored by the processor.

So, to summarize the basic structure of a line of code, it contains processor instructions ("do this"), directives (defining data elements or giving the processor specific ways of performing the task), data, and optional comments. This is pretty much true for all programming languages, although the structure and syntax will vary.

As I wrap up this section on assembly, I want to leave you with one small gift. A tradition in pretty much every programming class is that the first program you are taught to write is how to display "Hello, world!" on the screen. The way to create this friendly greeting varies based on the language, so when I cover various languages, I am going to show you what it looks like or have you do it. The intent isn't to have you memorize the code or learn to program but to give you a feel for what the code looks like for an actual application. So, without further ado, here is "Hello, world!" in all of its assembly glory:

section .text
    global _start      ;must be declared for linker (ld)

_start:                ;tells linker entry point
    mov edx,len        ;message length
    mov ecx,msg        ;message to write
    mov ebx,1          ;file descriptor
    mov eax,4          ;system call
    int 0x80           ;call kernel

section .data
msg db 'Hello, world!', 0xa    ;the message!
len equ $ - msg                ;length of the string

When the code is assembled and executed, it will display the following on the screen:

Hello, world!

COMPILED LANGUAGES

High-level languages have replaced assembly as the most commonly used ones in software development today. When creating a new application, the developer must decide between a compiled language and an interpreted language. A compiled programming language is one that requires the use of a compiler to translate it into machine code. Creating and using a program using a compiled language consists of three steps:

1. Write the application in a programming language, such as Java or C++. This is called the source code.
2. Use a compiler to translate the source code into machine code. Most software development applications have a compiler.
3. Execute the program file, which (in Windows) usually has an .exe extension.

There are dozens of compiled languages a programmer can choose from, but unless there is a specific need, the programmer is likely going to go with one of the more common ones. In days past, the choices might have been Fortran, BASIC, or Pascal. Now, Java, C, C++ (pronounced C plus plus), and C# (pronounced C sharp) are the most popular. The Linux and Windows kernels are written in C. The rest of Windows is written mostly in C++, with a bit of custom assembly thrown in for good measure.

Let's take a look at the source code for "Hello, world!" in Java, which is the most popular programming language in use today:

public class HelloWorld {
    public static void main(String[] args) {
        // Prints "Hello, world!" in the terminal window.
        System.out.println("Hello, world!");
    }
}

Compare and contrast the Java code to assembly. A few things might jump out. First, the program is a lot shorter. Second, the syntax is different. Java uses braces (the { and }) to indicate code blocks. Notice that for every open brace, there is a corresponding close brace. Single-line comments in Java are preceded with two slashes (//) instead of a semicolon. Even little things—assembly uses single quotes around the words that you want to print while Java uses double quotes—are different. As for the rest of the context, don't worry about understanding it all right now. Again, the point is to just get a feel for what some basic code looks like.
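If you want to see the three compiled-language steps in action with this Java program, one common way to do it from a command prompt (assuming the Java Development Kit is installed and the source code is saved as HelloWorld.java) looks like this:

javac HelloWorld.java    (the compiler translates the source code into a bytecode file, HelloWorld.class)
java HelloWorld          (runs the compiled program, which prints Hello, world!)

Strictly speaking, the Java compiler produces bytecode for the Java Virtual Machine rather than a Windows .exe file, but the write, compile, and execute workflow is the same.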
The next stop on your tour of compiled languages is C++. Let's look at source code for your new favorite program:

// Header file
#include <iostream>
using namespace std;

// the main function is where the program execution begins
int main()
{
    // the message to the world
    cout << "Hello, world!";
    return 0;
}
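As with the Java example, this source code must be compiled before it can run. For instance, with the GNU compiler on Linux or macOS (assuming the file is saved as hello.cpp; the filename is just illustrative), the commands would be something like:

g++ hello.cpp -o hello    (compiles the source code into an executable named hello)
./hello                   (runs the program, which prints Hello, world!)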
