POPL Reviewer
The Evolution of Programming Languages

Questions to ask: Who is considered the first computer programmer, and why? What were some of the earliest programming languages and their purposes?

1883: The journey starts here
Charles Babbage designed the early computing devices (the Analytical Engine and the Difference Engine), but the question of how to give the machine instructions remained open. Ada Lovelace wrote the instructions for the Analytical Engine: a program for computing Bernoulli numbers. The device was designed by Babbage and the code was written by Lovelace, which is why she is considered the first computer programmer. It was the first time in history that the capability of a computing device was demonstrated.

Machine Code and Assembly Language (1940s-1950s)
The earliest programming languages were closely tied to the hardware architecture of computers. Machine code, consisting of binary instructions executed directly by the CPU, was the most primitive form of programming. As computers became more complex, assembly language emerged as a more human-readable alternative, with mnemonic codes representing machine instructions.
Notable milestone: In 1949, the development of assembly language for UNIVAC I by Grace Hopper marked a significant advancement in programming languages, allowing programmers to use symbolic representations of machine instructions.

1949: Assembly Language
A type of low-level language. It consists mainly of symbolic instructions (mnemonics) that correspond directly to the machine's own instructions. Assembly language is still used today in real-time programs such as flight simulation and navigation systems and medical equipment, e.g., fly-by-wire (FBW) systems. It has also been used to write computer viruses.

1952: Autocode
Developed by Alick Glennie, Autocode was the first compiled computer programming language. COBOL and FORTRAN are among the languages that have been referred to as autocodes.

High-Level Languages (1950s-1960s)
The introduction of high-level languages abstracted away hardware-specific details, making programming more accessible and productive for developers. Languages like FORTRAN (1957), Lisp (1958), and COBOL (1959) paved the way for higher-level abstractions and introduced concepts such as variables, loops, and functions.
Notable milestone: In 1957, John Backus and his team at IBM developed FORTRAN (Formula Translation), the first high-level programming language, designed for scientific and engineering computations.

1957: FORTRAN
Developed by John Backus and IBM, FORTRAN was designed for numeric computation and scientific computing. Software for NASA's Voyager 1 and Voyager 2 space probes was originally written in FORTRAN 5.

1958: ALGOL
ALGOL stands for ALGOrithmic Language. It was the starting point for some of the most popular programming languages, including C, C++, and Java. It was also the first language to implement nested functions, and it has a simpler syntax than FORTRAN. It was the first programming language to delimit code blocks, with "begin" indicating that a block starts and "end" indicating that it ends.

1959: COBOL
COBOL stands for COmmon Business-Oriented Language. In 1997, 80% of the world's business ran on COBOL. The US Internal Revenue Service scrambled to patch its COBOL-based Individual Master File (IMF) in order to pay the tens of millions of payments mandated by the Coronavirus Aid, Relief, and Economic Security (CARES) Act.

Procedural Languages and Structured Programming (1960s-1970s)
The 1960s and 1970s witnessed the rise of procedural programming languages, emphasizing the use of procedures or subroutines to structure code.
Languages like ALGOL (1958), Pascal (1970), and C (1972) introduced structured programming concepts such as loops, conditionals, and modularization.
Notable milestone: In 1972, Dennis Ritchie developed the C programming language at Bell Labs, which became widely adopted for system programming and laid the foundation for modern operating systems like Unix.

1964: BASIC
BASIC stands for Beginner's All-purpose Symbolic Instruction Code. In 1991, Microsoft released Visual Basic, an updated version of BASIC. The first microcomputer version of BASIC was co-written by Bill Gates, Paul Allen, and Monte Davidoff for their newly formed company, Microsoft.

1972: C
C is a general-purpose, procedural programming language and remains one of the most popular programming languages today. Much of the code previously written in assembly language, such as operating systems, kernels, and many other applications, was rewritten in C. It can be used to implement operating systems and embedded systems, and even on the web through the Common Gateway Interface (CGI). C is the ancestor of almost all higher-level programming languages, including C#, D, Go, Java, JavaScript, Limbo, LPC, Perl, PHP, Python, and Unix's C shell.

Object-Oriented Programming (1980s-1990s)
The 1980s and 1990s saw the emergence of object-oriented programming (OOP) languages, which revolutionized software development by organizing code around objects and classes. Languages like Smalltalk (1972), C++ (1983), and Java (1995) popularized OOP concepts such as inheritance, encapsulation, and polymorphism.
Notable milestone: In 1983, Bjarne Stroustrup developed C++, an extension of the C language with support for object-oriented programming, templates, and exception handling, leading to widespread adoption in game development, system programming, and enterprise applications.

Modern Languages and Specialized Domains (2000s-Present)
In the 21st century, programming languages have continued to evolve to meet the diverse needs of developers and industries. Domain-specific languages (DSLs) have emerged to address specific domains such as web development, data science, and artificial intelligence. Languages like Python (1991), JavaScript (1995), and Ruby (1995) have gained popularity for their versatility, ease of use, and extensive libraries and frameworks.
Notable milestone: In 1991, Guido van Rossum released Python, a high-level programming language known for its simplicity, readability, and extensive standard library. Python has become one of the most widely used languages in various domains, including web development, data analysis, machine learning, and scientific computing.

Questions to ask: Why is it important to study multiple programming languages? How does knowledge of different programming languages benefit a software developer?

Language Syntax and Semantics
Syntax: the structure or form of expressions, statements, and program units.
Semantics: the meaning of those expressions, statements, and program units.
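To make the distinction concrete, here is a small illustrative sketch in Python (our own example, not from the slides): the first fragment breaks the syntax and is rejected before the program ever runs, while the second is syntactically valid but semantically wrong and quietly produces an incorrect result.

    # Syntax error: the form of the statement is malformed, so the parser
    # rejects it before the program runs.
    #   print("total is" 42)      # missing comma -> SyntaxError
    #
    # Semantic error: the form is fine, but the meaning is wrong, so the
    # program runs and silently produces an incorrect result.
    def average(values):
        # Bug: dividing by a fixed 2 instead of len(values) is perfectly
        # legal syntax, but it does not mean "average" for other list sizes.
        return sum(values) / 2

    print(average([10, 20, 30]))  # prints 30.0, but the true average is 20.0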
Why is syntax important in programming languages?
Syntax in programming languages is crucial for several reasons:
- Readability: Proper syntax makes code easier to read and understand, both for humans and machines. This is essential for collaboration, as other developers can quickly grasp what the code is doing.
- Maintainability: Code with clear syntax is easier to maintain and update. When the structure is consistent and follows established rules, it simplifies the process of making changes or adding new features.
- Debugging: Good syntax helps in identifying and fixing errors. Most programming environments provide clear error messages when syntax rules are violated, making it easier to locate and correct mistakes.
- Efficiency: Well-structured code can be more efficiently compiled or interpreted by the computer, leading to better performance.
- Scalability: Clean syntax supports the development of scalable software. As projects grow, maintaining a consistent syntax ensures that the codebase remains manageable.
In essence, syntax is like the grammar of a programming language. Just as proper grammar is essential for clear communication in human languages, proper syntax is vital for effective programming.

How do formal grammars help in defining programming languages?
Formal grammars play a crucial role in defining programming languages by providing a structured way to describe the syntax of these languages. Here are some key points on how they help:
- Syntax Specification: Formal grammars define the syntax rules of a programming language. They specify how valid sentences (or programs) in the language are constructed from the language's alphabet. This ensures that any program written in the language adheres to a consistent structure.
- Parsing: During the compilation process, a parser uses the formal grammar to analyze the source code. It checks whether the code follows the syntax rules defined by the grammar. If the code violates these rules, the parser generates syntax errors.
- Language Design: When designing a new programming language, formal grammars help language designers specify the syntax clearly and unambiguously. This makes it easier to implement compilers and interpreters for the language.
- Compiler Construction: Formal grammars are used to generate parsers automatically. Tools like Yacc (Yet Another Compiler-Compiler) and ANTLR (ANother Tool for Language Recognition) take a formal grammar as input and produce a parser that can recognize valid programs in the language.
- Documentation and Communication: Formal grammars provide a precise and unambiguous way to document the syntax of a programming language. This helps developers understand the language better and ensures consistent implementation across different platforms.

Formal Grammars
A formal grammar is a set of rules for generating strings in a language.
Types of grammars:
- Context-Free Grammars (CFG): used to describe the syntax of programming languages.
- Backus-Naur Form (BNF): a notation for expressing CFGs.
Example (an arithmetic expression grammar in BNF):
    <expr>   ::= <expr> + <term>   | <term>
    <term>   ::= <term> * <factor> | <factor>
    <factor> ::= ( <expr> )        | number
A hand-written parser for this grammar is sketched below.
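To illustrate how such a grammar drives parsing, here is a minimal recursive-descent parser for the expression grammar above, sketched in Python as our own example (the slides do not include one). The parsing functions mirror the nonterminals, with the left-recursive rules rewritten as loops, the usual transformation for hand-written parsers.

    # A minimal recursive-descent parser for:
    #   <expr>   ::= <expr> + <term>   | <term>
    #   <term>   ::= <term> * <factor> | <factor>
    #   <factor> ::= ( <expr> )        | number
    # Each function recognizes one nonterminal and returns the value of
    # the sub-expression it parsed.
    import re

    def tokenize(text):
        # numbers, parentheses, and the two operators; anything else is an error
        tokens = re.findall(r"\d+|[()+*]|\S", text)
        bad = [t for t in tokens if not re.fullmatch(r"\d+|[()+*]", t)]
        if bad:
            raise SyntaxError(f"unexpected token(s): {bad}")
        return tokens

    class Parser:
        def __init__(self, tokens):
            self.tokens = tokens
            self.pos = 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def eat(self, expected=None):
            tok = self.peek()
            if tok is None or (expected is not None and tok != expected):
                raise SyntaxError(f"expected {expected!r}, got {tok!r}")
            self.pos += 1
            return tok

        def expr(self):                 # <expr> ::= <term> { + <term> }
            value = self.term()
            while self.peek() == "+":
                self.eat("+")
                value += self.term()
            return value

        def term(self):                 # <term> ::= <factor> { * <factor> }
            value = self.factor()
            while self.peek() == "*":
                self.eat("*")
                value *= self.factor()
            return value

        def factor(self):               # <factor> ::= ( <expr> ) | number
            if self.peek() == "(":
                self.eat("(")
                value = self.expr()
                self.eat(")")
                return value
            return int(self.eat())

    def parse(text):
        parser = Parser(tokenize(text))
        result = parser.expr()
        if parser.peek() is not None:
            raise SyntaxError(f"trailing input at token {parser.peek()!r}")
        return result

    print(parse("2 * 4 + 3"))      # 11
    print(parse("(2 + 4) * 3"))    # 18

Parser generators such as Yacc and ANTLR, mentioned above, automate exactly this construction: they take the grammar as input and produce the recognizer for you.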
What are the benefits of using syntax trees?
Syntax trees, also known as abstract syntax trees (ASTs), offer several benefits in programming and compiler design:
- Code Representation: Syntax trees provide a structured representation of the source code, making it easier to analyze and manipulate. This is particularly useful for tasks like code optimization and transformation.
- Syntax Highlighting: In text editors and integrated development environments (IDEs), syntax trees can be used to provide syntax highlighting, which helps developers read and understand the code more easily.
- Refactoring: Syntax trees can identify areas of the code that need refactoring, such as duplicated code or complex conditional statements. This helps in maintaining cleaner and more efficient code.
- Type Checking: They are essential for type checking, ensuring that operations in the code are performed on compatible data types.
- Code Generation: Syntax trees are used in the code generation phase of compilers to produce the target code from the source code.
- Optimization: They enable various optimization techniques by providing a clear structure of the code, which can be analyzed and improved.
- Error Detection: Syntax trees help in detecting and reporting syntax errors in the code, making debugging easier.

Syntax Trees
A tree representation of the syntactic structure of a string according to a formal grammar.
Components:
- Nodes: represent constructs occurring in the source code.
- Edges: represent the syntactic relationships between constructs.
Example (the tree for 2 * 4 + 3):
        +
       / \
      *   3
     / \
    2   4

Semantic Analysis
The phase in which the compiler adds semantic information to the parse tree and builds the symbol table.
Tasks:
- Type Checking: ensuring that operations are performed on compatible types.
- Scope Resolution: determining the scope of variables and functions.
- Array Bound Checking: ensuring that array indices are within bounds.
Example:
    int a = "value"; // Semantic error: type mismatch

What are some common semantic errors in programming?
Semantic errors, also known as logic errors, occur when a program runs without crashing but produces incorrect results. Here are some common types of semantic errors:
- Incorrect Use of Variables: using a variable before it has been initialized, or using the wrong variable in a calculation.
- Conditional Logic Errors: mistakes in the logic of conditional statements or loops, such as using >= instead of >.
- Type Mismatches: performing operations on incompatible types, like adding a string to an integer without proper conversion.
- Array Index Out of Bounds: accessing an array element outside its valid range.
- Faulty Algorithm Design: implementing an algorithm incorrectly, leading to unexpected behavior.
- Incorrect Function Usage: misunderstanding how a function works or passing incorrect arguments to a function.
These errors can be tricky to detect because the program doesn't crash or produce error messages. Instead, it produces incorrect results, which requires careful testing and debugging to identify and fix.

Sharing programming experience in class: Do you have any specific examples or scenarios in mind where you've encountered semantic errors?

Attribute Grammars
A CFG with additional information (attributes) attached to its non-terminals.
Types of attributes:
- Synthesized Attributes: values computed from the attributes of the children nodes.
- Inherited Attributes: values passed down from parent nodes.
Example (a synthesized attribute rule):
    E -> E + T { E.value = E.value + T.value }
A small sketch that evaluates such synthesized attributes over the expression tree above is shown below.
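To tie the syntax-tree example to the attribute rule above, here is a small sketch in Python (again our own illustration, not from the slides) that builds the tree for 2 * 4 + 3 and computes a synthesized value attribute bottom-up, in the spirit of E.value = E.value + T.value.

    # Build the syntax tree for 2 * 4 + 3 and compute a synthesized
    # "value" attribute bottom-up: each node's value is derived only
    # from the values of its children.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        label: str                      # "+", "*", or a number literal
        left: Optional["Node"] = None
        right: Optional["Node"] = None

    def synthesize_value(node: Node) -> int:
        # Leaf: the attribute comes from the token itself.
        if node.left is None and node.right is None:
            return int(node.label)
        # Interior node: combine the children's synthesized attributes,
        # as in E.value = E.value + T.value.
        lhs = synthesize_value(node.left)
        rhs = synthesize_value(node.right)
        return lhs + rhs if node.label == "+" else lhs * rhs

    # The tree drawn above: (+ (* 2 4) 3)
    tree = Node("+", Node("*", Node("2"), Node("4")), Node("3"))
    print(synthesize_value(tree))   # 11

An inherited attribute would flow the other way, from parent to child (for example, passing a declared type down into an initializer), which this sketch does not need.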
How do attribute grammars enhance the expressiveness of CFGs?
Attribute grammars extend context-free grammars (CFGs) by adding semantic rules that provide context-sensitive information, which CFGs alone cannot handle effectively. Here are some key ways attribute grammars enhance the expressiveness of CFGs:
- Semantic Information: Attribute grammars allow the inclusion of semantic information through attributes associated with grammar symbols. These attributes can be synthesized (passed up the parse tree) or inherited (passed down the parse tree) to convey context-sensitive information.
- Static Semantics: They enable the definition of static semantics, such as type checking and scope resolution, which are crucial for programming languages. This means that attribute grammars can enforce rules that depend on the context, like ensuring variables are declared before use.
- Decorating Parse Trees: Attribute grammars "decorate" parse trees with additional information. For example, they can compute the types of expressions or the values of constants during parsing, which CFGs alone cannot do.
- Disambiguation: By using attributes, attribute grammars can resolve ambiguities in CFGs. They can specify additional constraints and rules that help in choosing the correct parse tree when multiple possibilities exist.
- Modularity: They provide a modular way to add semantic rules to a grammar without altering the underlying CFG. This makes it easier to extend and maintain the grammar.
In summary, attribute grammars enhance CFGs by adding the ability to handle context-sensitive information, enforce semantic rules, and resolve ambiguities, making them more powerful and expressive for defining programming languages.

Data Types and Structures
Data types are classifications that specify which type of value a variable can hold in programming.
Importance: Understanding data types is crucial for efficient programming and avoiding errors. Imagine trying to store a name in a variable meant for numbers; this would cause errors and confusion.
This section covers three topics: primitive data types, composite data types, and type systems.

Primitive Data Types
Basic data types provided by a programming language as building blocks.
- Integer (int): represents whole numbers. Example: int age = 25; (here, age can only hold whole numbers like 25, 30, etc.)
- Floating Point (float, double): represents numbers with fractional parts. Example: float price = 19.99; (useful for prices, measurements, etc.)
- Character (char): represents single characters. Example: char grade = 'A'; (used for letters, symbols, etc.)
- Boolean (bool): represents true/false values. Example: bool isStudent = true; (great for conditions and flags)

Composite Data Types
Data types that are composed of multiple primitive data types.
- Array: a collection of elements of the same type. Example: int numbers[] = {1, 2, 3, 4}; is an array of integers. Arrays are like lists that can store multiple values of the same type.
- Structure (struct): a collection of variables of different types. Example: struct Person { char name[50]; int age; float salary; }; Structures group different types of data together, like a person's name, age, and salary.
- Class: a blueprint for creating objects (instances of classes). Example: class Car { String model; int year; double price; } Classes are more advanced structures that include both data and methods to manipulate that data.

Type Systems
A set of rules that assigns a property called a type to the various constructs of a computer program.
- Static Typing: types are known at compile time. Example: int x = 10; the type of x is known when the code is compiled. Helps catch errors early but requires more upfront declaration.
- Dynamic Typing: types are known at runtime. Example: x = 10; the type of x is determined when the code runs. More flexible but can lead to runtime errors.
- Strong Typing: enforces strict type rules. Example: int x = "hello"; // Error. You can't assign a string to an integer variable. Prevents type errors but can be restrictive.
- Weak Typing: more flexible with type rules. Example: x = "hello"; // Allowed. The type can change dynamically. Easier to write but can lead to unexpected behaviors.
A short sketch contrasting these behaviors follows the summary below.

Summary
- Primitive Data Types: basic building blocks like int, float, char, and bool.
- Composite Data Types: complex types like arrays, structures, and classes.
- Type Systems: rules for type assignment, including static, dynamic, strong, and weak typing.
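For contrast with the statically typed C and Java snippets above, the following sketch (our own example, in Python, which is dynamically and comparatively strongly typed) shows a name being rebound to values of different types, while an operation on incompatible types still fails, only at runtime rather than at compile time.

    # Dynamic typing: the same name may be rebound to values of different
    # types, and the type is only known while the program runs.
    x = 10
    print(type(x).__name__)    # int
    x = "hello"
    print(type(x).__name__)    # str

    # Comparatively strong typing: incompatible types are not silently
    # coerced, so the error surfaces at runtime instead of compile time.
    try:
        result = 10 + "hello"
    except TypeError as err:
        print("runtime type error:", err)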
Introduction to Programming Paradigms
A programming paradigm is a fundamental style or approach to programming that dictates how solutions to problems are formulated and implemented in a programming language. It provides a framework for thinking about and structuring code.

Importance of Understanding Different Paradigms
- Versatility: knowing multiple paradigms makes you a more versatile programmer, capable of choosing the best approach for different problems.
- Problem-Solving: different paradigms offer different tools and techniques for solving problems, which can lead to more efficient and effective solutions.
- Adaptability: understanding various paradigms helps you adapt to new languages and technologies more easily.
- Innovation: exposure to different paradigms can inspire innovative thinking and new ways of approaching programming challenges.

Overview of Paradigms Covered
- Procedural Programming: focuses on a sequence of instructions or procedures to perform tasks. Examples: C, Pascal.
- Object-Oriented Programming (OOP): organizes code around objects, which are instances of classes. Examples: Java, C++.
- Functional Programming: treats computation as the evaluation of mathematical functions and avoids changing state and mutable data. Examples: Haskell, Lisp.
- Logic Programming: based on formal logic, where you define rules and facts, and the system derives conclusions. Examples: Prolog, Datalog.

Procedural Programming
Procedural programming is a paradigm based on the concept of procedure calls. It involves writing a sequence of instructions to perform a specific task.
Key characteristics:
Instructions are executed in a specific order, one after the other.
Example:
    #include <stdio.h>

    void greet() {
        printf("Hello, World!\n");
    }

    int main() {
        greet();
        return 0;
    }
Control structures like loops (for, while) and conditionals (if, else) manage the flow of the program.
Example:
    #include <stdio.h>

    int main() {
        int i;
        for (i = 0; i < 5; i++) {
            if (i % 2 == 0) {
                printf("%d is even\n", i);
            } else {
                printf("%d is odd\n", i);
            }
        }
        return 0;
    }
Code is organized into reusable blocks called functions or procedures, which can be called multiple times within a program.
Example:
    #include <stdio.h>

    void printEvenOrOdd(int num) {
        if (num % 2 == 0) {
            printf("%d is even\n", num);
        } else {
            printf("%d is odd\n", num);
        }
    }

    int main() {
        int i;
        for (i = 0; i < 5; i++) {
            printEvenOrOdd(i);
        }
        return 0;
    }

Object-Oriented Programming (OOP)
Object-Oriented Programming (OOP) is a paradigm based on the concept of "objects," which are instances of classes that contain data (attributes) and methods (functions).
Key characteristics:
Encapsulation: bundling data and the methods that operate on the data within one unit, or class, and restricting access to some of the object's components.
Example:
    public class Person {
        private String name;
        private int age;

        public Person(String name, int age) {
            this.name = name;
            this.age = age;
        }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public int getAge() { return age; }
        public void setAge(int age) { this.age = age; }
    }
Inheritance: a mechanism where a new class inherits properties and behavior (methods) from an existing class.
Example:
    public class Animal {
        public void makeSound() {
            System.out.println("Some sound");
        }
    }

    public class Dog extends Animal {
        @Override
        public void makeSound() {
            System.out.println("Bark");
        }
    }
Polymorphism: the ability to process objects differently based on their data type or class, allowing methods to be used interchangeably.
Example:
    public class Main {
        public static void main(String[] args) {
            Animal myDog = new Dog();
            myDog.makeSound(); // Output: Bark
        }
    }
Abstraction: hiding complex implementation details and showing only the necessary features of an object.
Example:
    abstract class Shape {
        abstract void draw();
    }

    class Circle extends Shape {
        @Override
        void draw() {
            System.out.println("Drawing a circle");
        }
    }

Functional Programming
Functional programming is a paradigm that treats computation as the evaluation of mathematical functions. It emphasizes the use of pure functions and avoids changing state and mutable data.
Key characteristics:
Pure Functions: functions that, given the same inputs, always return the same output and do not have side effects (i.e., they do not modify any external state).
Example:
    -- Example in Haskell
    add :: Int -> Int -> Int
    add x y = x + y
Immutability: data is immutable, meaning it cannot be changed once created. Instead, new data structures are created from existing ones.
Example:
    -- Example in Haskell (GHCi)
    let x = 5
    let y = x + 1   -- x remains 5, y is 6
First-Class Functions: functions are treated as first-class citizens, meaning they can be assigned to variables, passed as arguments, and returned from other functions.
Example:
    -- Example in Haskell
    applyFunction :: (Int -> Int) -> Int -> Int
    applyFunction f x = f x

    double :: Int -> Int
    double x = x * 2

    main = print (applyFunction double 4)   -- Output: 8
Higher-Order Functions: functions that take other functions as arguments or return them as results.
Example:
    -- Example in Haskell
    import Prelude hiding (map)   -- hide the built-in map so this definition can be used

    map :: (a -> b) -> [a] -> [b]
    map f []     = []
    map f (x:xs) = f x : map f xs

    double :: Int -> Int
    double x = x * 2

    main = print (map double [1, 2, 3])   -- Output: [2,4,6]

Sample functional programming languages:
- Haskell: a purely functional programming language known for its strong static typing and lazy evaluation.
- Lisp: one of the oldest programming languages, known for its powerful macro system and symbolic computation.
- Erlang: designed for concurrent and distributed systems, widely used in telecommunications.

Logic Programming
Logic programming is a paradigm based on formal logic. It involves writing programs by defining facts and rules about problems within a system of formal logic.
Key characteristics:
Declarative Nature: logic programming is declarative, meaning you specify what the program should accomplish rather than how to accomplish it.
Example:
    % Example in Prolog
    parent(john, mary).
    parent(mary, susan).
Use of Facts and Rules: programs consist of facts and rules. Facts represent basic assertions about the problem domain, while rules define relationships between facts.
Example:
    % Example in Prolog
    parent(john, mary).
    parent(mary, susan).
    grandparent(X, Y) :- parent(X, Z), parent(Z, Y).
Backtracking and Pattern Matching: the system automatically searches for solutions by trying different possibilities (backtracking) and matching patterns to find valid solutions.
Example:
    % Example in Prolog
    ?- grandparent(john, susan).
    % Output: true

Sample logic programming languages:
- Prolog: a widely used logic programming language known for its applications in artificial intelligence and computational linguistics.
- Datalog: a subset of Prolog, often used for database queries and data integration.