Propositional Logic


Propositional Logic

Proposition: A proposition is a declarative sentence that is either true or false, but not both. We write T for true and F for false.

Atomic propositions: Propositions that cannot be expressed in terms of simpler propositions.

Compound propositions: Propositions formed from atomic propositions using logical operators.

Logical Operators:

1. Negation (~):
Notations: ~p, -p, p', Np, and !p.
Terminology: "not p".
Truth table: ~p is T when p is F, and F when p is T.

2. Conjunction / AND (^):
Notation: p ^ q.
Terminology: "p and q".
Key fact: The conjunction of p and q is true only if both p and q are true.

3. Disjunction / Inclusive OR (v):
Notation: p v q.
Terminology: "p or q".
Key fact: The disjunction of p and q is false only if both p and q are false.

4. XOR / Exclusive OR (⊕):
Notation: p ⊕ q.
Terminology: "p exclusive or q" / "p xor q".
Key fact: The XOR of p and q is false exactly when p and q have the same truth value, i.e., p ⊕ q is false if both are false or both are true.

5. Implication / Conditional (->):
Notation: p -> q.
Terminology: "p implies q", "if p then q", "if p, q", "p only if q", "p is sufficient for q", "a sufficient condition for q is p", "q if p", "q whenever p", "q when p", "q is necessary for p", "q follows from p", "q unless ~p", "q provided that p".
Key facts:
- If p is false, then p -> q is automatically true.
- If q is true, then p -> q is automatically true.
- The only way p -> q is false is when p is true and q is false.

6. Biconditional / Logical equality (↔):
Notation: p ↔ q.
Terminology: "p if and only if q", or "p iff q".
Note: A biconditional is true when p and q have the same truth value, i.e., both are true or both are false.

Converse of an implication: The converse of an implication p -> q is q -> p.
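The operator truth tables above can be generated by brute-force enumeration; a minimal Python sketch (the function names are our own choices):

```python
# Truth tables for the compound propositions described above,
# generated by enumerating all assignments to p and q.
from itertools import product

def negation(p):         return not p
def conjunction(p, q):   return p and q
def disjunction(p, q):   return p or q
def xor(p, q):           return p != q
def implication(p, q):   return (not p) or q   # false only when p is T and q is F
def biconditional(p, q): return p == q         # true when p and q agree

print("p  q  p^q  pvq  p(+)q  p->q  p<->q")
for p, q in product([True, False], repeat=2):
    row = [p, q, conjunction(p, q), disjunction(p, q),
           xor(p, q), implication(p, q), biconditional(p, q)]
    print("  ".join("T" if v else "F" for v in row))
```

Printing each operator column side by side makes the key facts above directly visible, e.g. the p -> q column has a single F, in the row where p is T and q is F.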
Inverse of an implication: The inverse of an implication p -> q is ~p -> ~q.

Contrapositive of an implication: The contrapositive of an implication p -> q is ~q -> ~p.

Logical Equivalences: Two or more compound propositions are equivalent if they have the same truth value for every assignment of truth values to their propositional variables.
Example: An implication is equivalent to its contrapositive, p -> q ≡ ~q -> ~p; it is also equivalent to a disjunction, p -> q ≡ ~p v q.

Important Logical Equivalences:
Logical equivalences involving conditionals.
Logical equivalences involving biconditionals.

De Morgan's Laws:
1. ~(p ^ q) ≡ ~p v ~q
2. ~(p v q) ≡ ~p ^ ~q
A related equivalence: p -> q ≡ ~p v q.

Counterexample: An example or instance that shows two or more compound propositions are not logically equivalent is called a counterexample.

Tautology: A compound proposition that is true for every combination of truth values of its propositional variables.
Example: p v ~p

Contradiction: A compound proposition that is false for every combination of truth values of its propositional variables.
Example: p ^ ~p

Predicates & Quantifiers

Predicates: A statement of the form P(x1, x2, …, xn) is the value of the propositional function P at the n-tuple (x1, x2, …, xn); P is also called the n-ary, or n-place, predicate.
Analogy: Think of a predicate as a function that takes values of the variables as input and returns True or False.

Domain / Universe of Discourse: The set from which the variables (n-tuples) in the predicate take their values.
Example: The domain could be
- the set of natural numbers,
- the set of whole numbers, etc.

Quantifiers: Quantifiers are expressions or phrases that specify how many elements or objects a statement applies to.

Types of quantifiers:
1. Universal quantifier: If P(x) is a predicate, then the statement "P(x) is true for all values of x in the domain" is called a universal quantification.
It is denoted by the symbol ∀ and written ∀x P(x).
2. Existential quantifier: If P(x) is a predicate, then the statement "P(x) is true for at least one value of x", i.e., "there exists some element x in the domain such that P(x) is true", is called an existential quantification. It is denoted by the symbol ∃ and written ∃x P(x).

Logical equivalences of quantified predicates: Statements involving predicates and quantifiers are logically equivalent if and only if they have the same truth value. We use the notation S ≡ T for logically equivalent statements involving predicates and quantifiers.
Example: ∀x [P(x) ^ Q(x)] ≡ ∀x P(x) ^ ∀x Q(x)

Negating quantified predicates:
1. The negation of a universally quantified predicate is equivalent to an existential quantification of the negated predicate.
2. The negation of an existentially quantified predicate is equivalent to a universal quantification of the negated predicate.
Example: ~(∀x P(x)) ≡ ∃x ~P(x) and ~(∃x P(x)) ≡ ∀x ~P(x)

Rule of thumb for negation: When a negation sign "jumps over" a quantifier, the quantifier "flips."

Nested Quantifiers: Nested quantifiers occur when one quantifier is used within the scope of another quantifier in a logical expression. These quantifiers can be either universal (denoted by ∀) or existential (denoted by ∃).
Note: The order of quantifiers matters for nested quantifiers. If two or more adjacent quantifiers are of the same kind, their order doesn't matter; when the quantifiers differ, the order is important.
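Over a finite domain, ∀ and ∃ correspond to Python's all() and any(), which makes the negation rules and the order-sensitivity of nested quantifiers easy to sanity-check. The domain and predicates below are arbitrary illustrations of our own:

```python
# Quantifiers over a finite domain, modeled with all() / any().
domain = range(10)
P = lambda x: x % 2 == 0          # example predicate: "x is even"

forall = all(P(x) for x in domain)   # ∀x P(x)
exists = any(P(x) for x in domain)   # ∃x P(x)

# ~(∀x P(x)) ≡ ∃x ~P(x)
assert (not forall) == any(not P(x) for x in domain)
# ~(∃x P(x)) ≡ ∀x ~P(x)
assert (not exists) == all(not P(x) for x in domain)

# Order matters for mixed quantifiers:
# ∀x ∃y (x + y = 0) vs. ∃y ∀x (x + y = 0) over the integers -5..5.
ints = range(-5, 6)
forall_exists = all(any(x + y == 0 for y in ints) for x in ints)  # y = -x works per x
exists_forall = any(all(x + y == 0 for x in ints) for y in ints)  # no single y fits all x
print(forall_exists, exists_forall)
```

The last two lines show the note above in action: swapping ∀ and ∃ changes the truth value.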
Examples:

Rules of Inference: We represent the rules of inference in the following way:

Proof Methods:
1. Direct proof.
2. Proof by contraposition / indirect proof.
3. Vacuous proof.
4. Trivial proof.

Week 2 Cheat Sheet

Proof Methodologies:

Proof by contradiction: Let the statement to be proved be p. Assume its negation ~p and derive that ~p is false, which implies that p is true.

Existence proof:
1. Constructive existence proof: We find a value from the domain for which the predicate P(x) is true; that is, we exhibit an explicit example or instance.
2. Non-constructive existence proof: We prove that some x satisfying P(x) exists without providing a specific example or an algorithm for finding it.

Proof by cases: A proof methodology that demonstrates a statement or predicate is true by splitting the problem into parts that are considered and evaluated separately.

Counterexample: To show that a universally quantified statement is false, we need only one example or instance, called a counterexample for the predicate or statement.

Fallacy: An instance of improper reasoning leading to an unexpected, false result is called a fallacy. Fallacies usually arise from incorrect application of the rules of logic.

Mathematical Induction (Weak Induction): Mathematical induction is a technique for proving that a statement, theorem, or formula holds for every natural number n. The general principle used to prove such statements is the Principle of Mathematical Induction.
Method:
Step 1: Check whether the given statement is true for n = 1.
Step 2: Assume that the given statement P(n) is true for n = k, where k is any positive integer.
Step 3: Prove that the result is true for P(k + 1).
If the above conditions are satisfied, it can be concluded that P(n) is true for all natural numbers n.
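For intuition, the conclusion of an induction proof can be spot-checked by brute force over many values of n. The sum formula below is a standard example chosen here for illustration, not taken from the notes:

```python
# P(n): 1 + 2 + ... + n == n(n+1)/2, a classic statement proved by weak induction.
def P(n: int) -> bool:
    return sum(range(1, n + 1)) == n * (n + 1) // 2

# Step 1 (base case), plus many instances of the inductive conclusion:
assert P(1)
assert all(P(n) for n in range(1, 1000))
print("P(n) holds for n = 1..999")
```

Brute force only checks finitely many cases, of course; the induction proof is what covers all natural numbers.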
Strong Induction: Proof by strong induction is a mathematical technique for proving universal generalizations. It differs from ordinary mathematical induction (also known as weak mathematical induction) in the inductive step. In weak induction, the inductive step shows that if some element n has a property, then the successor element n + 1 must also have that property. In strong induction, the inductive step shows that if all elements up to and including n have the property, then n + 1 has it as well.
Method:
Step 1: Basis step: Prove that P(1) is true.
Step 2: Induction hypothesis: Assume P(1) ^ P(2) ^ … ^ P(k) is true for an arbitrary k.
Step 3: Inductive step: Using the hypothesis, show that P(k + 1) is true.

Sequences: A sequence is an ordered list of elements; each element of a sequence is called a term. We use {an} to represent a sequence.

Summation: The addition of the terms of a sequence. We use the notation ∑ to represent a summation.

Arithmetic Progression: A sequence of the form a, a + d, a + 2d, …, a + (n - 1)d
where
a -> first term
d = a2 - a1 -> common difference (the difference of any two consecutive terms of the sequence).
- The nth term of an arithmetic progression is an = a + (n - 1)d.
- The sum of n terms of an AP is Sn = n/2 [2a + (n - 1)d], or equivalently Sn = n/2 [a + l],
where
n -> number of terms
a -> first term
d -> common difference
Sn -> sum of n terms
l -> last term of the sequence.

Harmonic Progression: A sequence of the form 1/a, 1/(a + d), 1/(a + 2d), … is called a harmonic progression.
Note: If h1, h2, …, hn are the terms of a harmonic progression, then 1/h1, 1/h2, … form the terms of an arithmetic progression.

Geometric Progression: A sequence of the form a, ar, ar^2, ar^3, … is called a geometric progression.
Where,
a -> first term.
r = a2/a1 -> common ratio (the ratio of two consecutive terms of the sequence).
The nth term of a GP is an = ar^(n - 1).
The sum of n terms of a GP is
Sn = a(1 - r^n)/(1 - r) (if r < 1)
Sn = a(r^n - 1)/(r - 1) (if r > 1)
The sum of infinitely many terms of a GP is a/(1 - r), provided |r| < 1.

Week 3 Reading Material on Set Theory

Set: A set is an unordered collection of objects. The objects can be numbers, names, matrices, things, etc.

Elements: The objects present in a set are called the elements, or members, of the set.

Representation of Sets:

Roster form: Represents a set by listing all its elements, separated by commas.
Example: The set of all whole numbers W = {0, 1, 2, 3, …}

Set builder form: Represents a set by writing a property that the members of the set must satisfy.
Example: The set of natural numbers whose squares are at most 100: S = {x | x ∈ N and x^2 <= 100}

Some standard sets:
1. N = set of all natural numbers = {1, 2, 3, …}
2. Z = set of all integers = {…, -3, -2, -1, 0, 1, 2, …}
3. Q = set of all rational numbers = {p/q | p ∈ Z, q ∈ Z, q ≠ 0}
4. W = set of all whole numbers = {0, 1, 2, …}

Subset of a set: If A and B are two sets such that every element of A is also an element of B, then A is called a subset of B.
- We use the symbol A ⊆ B to represent "A is a subset of B".
- We can express the subset relation using predicates and quantifiers:
A ⊆ B ↔ ∀x (x ∈ A -> x ∈ B)
(read as: for all x, if x belongs to A then x belongs to B).

Venn diagram:
Example: Let A = {Apple, Banana} and B = {All fruits}. Here, A ⊆ B.
Note: Every set is a subset of itself.

Superset of a set: Suppose A and B are two sets and every element of A is also an element of B; then B is called a superset of A.
- It is symbolically represented as B ⊇ A.
Example: Let A = {Apple, Banana} and B = {All fruits}. Here, B ⊇ A.

Proper subset of a set: A proper subset of A is a subset of A that is not equal to A; it does not contain all the elements present in A.
Example: A = {2, 4} and B = {2, 4, 6, 8}. Here, A is a proper subset of B, written A ⊂ B.

Equality of sets: Two sets A and B are equal if they contain the same elements, i.e.,
A ⊆ B and B ⊆ A (A is a subset of B and B is a subset of A).
Example: A = {1, 2, 3} and B = {1, 2, 3}. Here, A ⊆ B and B ⊆ A, so A = B.

Types of sets:
1. Empty set: An empty set is a set with no elements in it. The empty set is denoted by {} or Φ.
Example: S = {x ∈ R | x^2 < 0}
Note: Always remember that Φ and {Φ} are not the same: Φ is the empty set, whereas {Φ} is a set containing the single element Φ.
2. Singleton set: A set with exactly one element is called a singleton set.
Example: A = {9}
3. Finite set: If a set has a finite number of elements, it is called a finite set; its elements can be counted.
Example: A = {2, 3, 4, 8}
4. Infinite set: If a set has infinitely many elements, it is called an infinite set; its elements cannot be exhaustively listed.
Example: The set of all natural numbers (N).

Theorem: The empty set is a subset of every set.
Proof: We use proof by contradiction. Suppose that Φ ⊄ A for some set A.
⇨ There must be at least one element x that belongs to Φ but does not belong to A.
⇨ But Φ does not contain any elements.
⇨ This is a contradiction, which arose from our incorrect assumption that Φ ⊄ A.
⇨ Therefore Φ ⊆ A, i.e., the empty set is a subset of any set.

Power Set: The power set of a set S is the set of all subsets of S. It is denoted by P(S).
Example: Consider the set S = {2, 3, 5}.
Then P(S) = {Φ, {2}, {3}, {5}, {2, 3}, {3, 5}, {2, 5}, {2, 3, 5}}.
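The power set example can be reproduced with itertools.combinations; a small sketch (the function name power_set is our own):

```python
# Power set of a finite set: all subsets of each size r = 0, 1, ..., n.
from itertools import combinations

def power_set(s):
    items = list(s)
    return [set(c) for r in range(len(items) + 1)
                   for c in combinations(items, r)]

S = {2, 3, 5}
P_S = power_set(S)
print(len(P_S))                 # 8, i.e., 2**3 subsets
print(set() in P_S, S in P_S)   # the empty set and S itself are both subsets
```

For a set of n elements this produces 2^n subsets, since each element is either in or out of a given subset.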
Note: For a set containing n elements, the power set contains 2^n elements.

Cardinality of a set: The cardinality of a set is the number of distinct elements present in the set. |S| denotes the cardinality of a set S.
Examples:
1. Let S = {9, 21, 19, 7}. Then the cardinality of S is |S| = 4.
2. Let S = {n ∈ N : n is a prime and n < 10} = {2, 3, 5, 7}. Then |S| = 4.
3. |Φ| = 0
4. |{Φ, {Φ}}| = 2
5. |{x}| = 1
6. |{x, {x}, {x, {x}}}| = 3

Ordered n-tuple: An ordered n-tuple is an ordered collection of n elements a1, a2, …, an.
Example: (3, 5, 2) is an ordered 3-tuple, where 3 is the first element, 5 is the second, and 2 is the third. Note that (3, 5, 2) is not the same as (5, 2, 3); the order matters.

Cartesian product: Let S and T be two sets. The Cartesian product of S and T is defined as:
S x T = {(a, b) | a ∈ S and b ∈ T}
Example: S = {1, 2, 3} and T = {p, q}
⇨ S x T = {(1, p), (1, q), (2, p), (2, q), (3, p), (3, q)}
⇨ T x S = {(p, 1), (p, 2), (p, 3), (q, 1), (q, 2), (q, 3)}

Theorem: If S and T are finite sets, then |S x T| = |S| x |T|.
Proof:
Case 1: Let S and T be nonempty sets, S = {x1, x2, …, xm} and T = {y1, y2, …, yn}.
S x T = {
(x1, y1), (x1, y2), …, (x1, yn),  ---- n elements
(x2, y1), (x2, y2), …, (x2, yn),  ---- n elements
…
(xm, y1), (xm, y2), …, (xm, yn)   ---- n elements
}
Here, there are n elements taken m times.
⇨ The total number of elements in S x T is mn, or equivalently |S x T| = mn.
Case 2: Let S = {x1, x2, …, xm} and let T be the empty set.
⇨ There is no tuple in S x T, since for any element of S, say x1, there is no element of T to pair it with.
⇨ Hence S x T = Φ, and thus |S x T| = 0 = |S| x |T|, since |S| = m and |T| = 0.
Case 3: S is the empty set and T = {y1, y2, …, yn}.
⇨ This is similar to the previous case.
Case 4: Let S and T both be empty sets.
S x T = Φ
⇨ |S x T| = |S| x |T| = 0.

Cartesian product of sets S1, S2, …, Sn:
S1 x S2 x … x Sn = {(t1, t2, …, tn) | t1 ∈ S1, t2 ∈ S2, …, tn ∈ Sn}
Example: S = {1, 2, 3}, T = {x, y}, and V = {a, b}
S x T x V = {(1, x, a), (1, y, a), (2, x, a), (2, y, a), (3, x, a), (3, y, a), (1, x, b), (1, y, b), (2, x, b), (2, y, b), (3, x, b), (3, y, b)}

Set Operations:

Venn diagrams: A graphical representation of sets as circles enclosed within a rectangle. Here, U is called the universal set.

Universal set: A set that contains all the elements of the other sets under consideration is called a universal set.

Consider the example of a university where students can opt for one or more majors from the three available majors:
1. Computer Science
2. Business
3. Mathematics
Say the students enrolled in each major are given by the sets
C = {Rahul, Ram, Sonu, Monu}
B = {Rahul, Lucky, Rinky, Pinky}
M = {Rahul, Lucky, Mohan, Shyam}
This scenario can be represented with a Venn diagram.

1. The union operator: For sets A and B, their union is A U B: the set containing all elements that are in A, in B, or in both. Union is denoted mathematically as:
∀A, B: A U B = {x | x ∈ A v x ∈ B}
Example: The union of the Computer Science and Business sets in the previous example is
⇨ C U B = {Ram, Sonu, Monu, Rahul, Lucky, Rinky, Pinky}
Note: ∀A, B: (A U B ⊇ A) ^ (A U B ⊇ B), i.e., for all A and B, A U B is a superset of both A and B.

2. The intersection operator: For sets A and B, their intersection A ∩ B is the set containing all elements that are in both A and B. We express this as
∀A, B: A ∩ B = {x | x ∈ A ^ x ∈ B}
Example: In the example above, intersecting the Business and Mathematics sets gives
⇨ M ∩ B = {Rahul, Lucky}
⇨ M ∩ B ∩ C = {Rahul}
Note: A ∩ B is a subset of both A and B.
In fact, it is the largest such subset.

3. Disjoint sets: If A and B are two sets whose intersection is empty, i.e., they have no common elements, then the two sets are called disjoint sets.
Example: A = {1, 2, 3} and B = {5, 6, 7}. Here A ∩ B = Φ, since A and B have no common elements.

4. Difference of sets: For two sets A and B, the difference of A and B is written A - B. It is the set of elements that are present in A but not in B:
A - B = {x | x ∈ A ^ x ∉ B} = {x | ~(x ∈ A -> x ∈ B)}
It is also called the complement of B with respect to A.
Example: A = {1, 2, 3, 4, 5} and B = {3, 4, 5, 6, 7}; A - B = {1, 2}.

Inclusion-exclusion principle:
Theorem: Let A and B be two finite sets. Then |A U B| = |A| + |B| - |A ∩ B|.
Proof: Suppose |A ∩ B| = r, |A - B| = p, and |B - A| = q.
|A| = |A - B| + |A ∩ B| = p + r
|B| = |B - A| + |A ∩ B| = q + r
|A U B| = |A - B| + |B - A| + |A ∩ B| = p + q + r
Adding and subtracting r on the right-hand side,
⇨ |A U B| = p + q + r + r - r = (p + r) + (q + r) - r = |A| + |B| - |A ∩ B|
Therefore, |A U B| = |A| + |B| - |A ∩ B|.

Complement of a set: The complement of a set A is Ā = U - A.
Example: If U = N (the natural numbers), the complement of the set {3, 5} is {1, 2, 4, 6, 7, …}.

Set Identities:
1. Identity laws:
- A U Φ = A
- A ∩ U = A
2. Domination laws:
- A U U = U
- A ∩ Φ = Φ
3. Idempotent laws:
- A U A = A
- A ∩ A = A
4. Double complement:
- (A')' = A
5. Commutative laws:
- A U B = B U A
- A ∩ B = B ∩ A
6. Associative laws:
- A U (B U C) = (A U B) U C
- A ∩ (B ∩ C) = (A ∩ B) ∩ C
7. Distributive laws:
- A U (B ∩ C) = (A U B) ∩ (A U C)
- A ∩ (B U C) = (A ∩ B) U (A ∩ C)
8. De Morgan's laws:
- (A U B)' = A' ∩ B'
- (A ∩ B)' = A' U B'

Methods to prove set identities:
1. The subset method.
2. A membership table.
3. Applying other known set identities.
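Before proving identities formally, they can be checked numerically on small sets using Python's built-in set operators; the example sets below are our own choices:

```python
# Numeric spot-check of inclusion-exclusion and De Morgan's laws for sets.
U = set(range(1, 11))          # a small universal set
A = {1, 2, 3, 4, 5}
B = {3, 4, 5, 6, 7}

# |A U B| = |A| + |B| - |A ∩ B|
assert len(A | B) == len(A) + len(B) - len(A & B)

# (A U B)' = A' ∩ B'   and   (A ∩ B)' = A' U B'  (complements taken in U)
assert U - (A | B) == (U - A) & (U - B)
assert U - (A & B) == (U - A) | (U - B)

print(len(A | B))   # 7
```

A passing check on examples is not a proof, but a failing one is a counterexample, which is exactly how counterexamples are used above.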
Subset method: Show that each side of the identity is a subset of the other side.
Membership table: For each possible combination of the atomic sets, show that an element in exactly those atomic sets must either belong to both sides or belong to neither side.
Apply existing identities: Start with one side and transform it into the other using a sequence of steps, applying an established identity at each step.

Examples:
1. Show that if A ⊆ B and B ⊆ C, then A ⊆ C (using the subset method).
Solution: Assume A ⊆ B and B ⊆ C.
Let x ∈ A.
Since A ⊆ B, it follows that x ∈ B.
And since B ⊆ C, it follows that x ∈ C.
Therefore, for every x, if x ∈ A then x ∈ C.

2. Use a membership table to show that A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C).
Solution:
Table 2: A membership table for the distributive property

A B C | B ∪ C | A ∩ (B ∪ C) | A ∩ B | A ∩ C | (A ∩ B) ∪ (A ∩ C)
1 1 1 |   1   |      1      |   1   |   1   |         1
1 1 0 |   1   |      1      |   1   |   0   |         1
1 0 1 |   1   |      1      |   0   |   1   |         1
1 0 0 |   0   |      0      |   0   |   0   |         0
0 1 1 |   1   |      0      |   0   |   0   |         0
0 1 0 |   1   |      0      |   0   |   0   |         0
0 0 1 |   1   |      0      |   0   |   0   |         0
0 0 0 |   0   |      0      |   0   |   0   |         0

Here, the values in column 5 and column 8 are the same, which proves the equality.

3. Use set builder notation and logical equivalences to establish the first De Morgan law, (A ∩ B)' = A' ∪ B' (using set identities).
Solution: We can prove this identity with the following steps.

Generalized union:
The binary union operation is A U B; its n-ary grouping is A1 U A2 U A3 … U An.
It is represented by the notation ⋃_{i=1}^{n} Ai, also called the "big U" notation.
Example: For i = 1, 2, …, let Ai = {1, 2, …, i}. Then
A1 = {1}
A2 = {1, 2}
A3 = {1, 2, 3}
and so on. The generalized union is ⋃_{i=1}^{n} Ai = {1, 2, …, n} = An.
Note: The grouping and order are irrelevant here; it does not matter in which order we group the sets.

Generalized intersection:
The binary intersection operation is A ∩ B; the n-ary intersection is A1 ∩ A2 ∩ … ∩ An.
It is represented by the notation ⋂_{i=1}^{n} Ai, also called the "big arch" notation.
Example: For i = 1, 2, …, let Ai = {1, 2, …, i}.
Then
A1 = {1}
A2 = {1, 2}
A3 = {1, 2, 3}
and so on. The generalized intersection is ⋂_{i=1}^{n} Ai = {1} = A1.
Note: The grouping and order are irrelevant here; it does not matter in which order we group the sets.

Representing sets with bit strings: We follow these steps to represent a set using a bit string:
1. Impose an order on the elements of the universal set.
2. Arrange the elements in right-to-left order in the bit string.
3. If an element is present in the set, assign it the bit value 1; if it is not present, assign the bit value 0.

Set operations can then be carried out with bitwise operations on the bit strings.

1) Relations

A. Introduction to Relations:
a. Binary relation: A binary relation R from a set A to a set B is a subset of A x B, i.e., R ⊆ A x B.
Consider:
(x, y) ∈ R -> x is related to y by the relation R.
(x, y) ∉ R -> x is not related to y by the relation R.
Notation: x R y
Note:
1. Domain of R, written dom(R): the set of all first elements of the pairs (a, b) in R.
2. Range of R, written range(R): the set of all second elements of the pairs (a, b) in R.
Note: By definition, any subset of A x B is a relation, and we know that { } is a subset of A x B; hence { } is a relation.

2. Complementary relation: Let R be any binary relation from A to B, i.e., R ⊆ A x B. The complement of the relation R is the binary relation defined by
R̄ = {(a, b) | (a, b) ∉ R} = (A x B) - R

3. Inverse relation: Any binary relation R ⊆ A x B has an inverse relation R⁻¹ ⊆ B x A defined by
R⁻¹ = {(b, a) ∈ B x A : (a, b) ∈ R}

4. Relation on a set: A binary relation from a set A to itself is called a relation on the set A: R ⊆ A x A.

2) Properties of Relations

A. Reflexive property: A relation R on a set A is called reflexive if (x, x) ∈ R for every element x in A.
B. Symmetric property: A relation R on a set A is called symmetric if for every (x, y) in R, (y, x) is also in R, for all x, y in A.
C. Antisymmetric relation: A relation R on a set A is called antisymmetric if, for all distinct x and y in A, (x, y) ∈ R implies (y, x) ∉ R.
D. Transitive relation: A relation R on a set A is called transitive if, whenever (x, y) ∈ R and (y, z) ∈ R, then (x, z) ∈ R.

3) Composite Relations

A. Combining relations: Let A and B be two sets; a binary relation from A to B is defined as a subset of A x B, and there are many relations from A to B. Let R1 and R2 be two relations from A to B, so R1 ⊆ A x B and R2 ⊆ A x B. We can apply operators such as U, ∩, and - to relations from A to B.

B. Composite relations: Let R be a relation from A to B and S a relation from B to C. The composite relation SoR is defined as:
SoR = {(x, z) ∈ A x C | there exists y ∈ B such that (x, y) ∈ R and (y, z) ∈ S}
Note: Let R be a relation on the set A, i.e., R ⊆ A x A. The powers Rⁿ for n = 1, 2, 3, … are defined as follows:
- R² = RoR
- R³ = (RoR)oR = R²oR, and so on.

4) N-ary Relations & Representation of Relations

A. N-ary relation: An n-ary relation R on the sets A1, A2, …, An is simply a subset R ⊆ A1 x A2 x … x An, where
Ai -> the domains of R
n -> the degree of the relation R

B. Functional relation: An n-ary relation R on sets A1, …, An is called functional in the domain Ai if it contains at most one tuple (…, xi, …) for each value xi from the domain Ai.

C. Representation of relations: A binary relation R ⊆ A x B can be represented by a 0-1 matrix.
Note:
1. If all the principal diagonal elements of the matrix of a relation are 1, then the relation is reflexive:
1 * * *
* 1 * *
* * 1 *
* * * 1

2. If the elements mirrored across the diagonal are identical, then the relation is symmetric:

* 1 * *
1 * 0 *
* 0 * *
* * * *

3. If the elements mirrored across the diagonal have opposite values, or both are zero (i.e., a 1 at position (i, j), i ≠ j, is mirrored by a 0 at (j, i)), then the relation is antisymmetric:

* 0 * *
1 * 0 1
* 0 * *
* 0 * *

D. Closure of a relation: For any property X, the "X closure" of a set S is defined as the smallest superset of S that has the property X.
Reflexive closure: The reflexive closure of a relation R on A is obtained by adding (a, a) to R for each a ∈ A, i.e., it is R U IA (where IA is the identity relation on A).
Symmetric closure: The symmetric closure of R is obtained by adding (b, a) to R for each (a, b) in R, i.e., it is R U R⁻¹.
Transitive closure: The transitive closure, or connectivity relation, of R is obtained by repeatedly adding (a, c) to R for each (a, b) and (b, c) in R.

5) Applications of Relations

A. Databases & relations: Relational databases (such as SQL databases) consist of records of different entities stored in tables that are logically connected with the help of relations, which determine how they interact with each other.
B. Selection operator: Let R be any n-ary relation and C a condition that elements in R may satisfy. The selection operator sC maps the n-ary relation R to the n-ary relation of all n-tuples of R that satisfy the condition C.
C. Projection operator: Let A = A1 x … x An be any n-ary domain, and let {ik} = {i1, …, im} be a sequence of indices all falling in the range 1 to n, that is, with 1 <= i1 < … < im <= n.

Wheels (Wn): A wheel Wn is obtained from a cycle Cn, for n >= 3, by adding a new vertex and connecting this new vertex to each of the n vertices in Cn by new edges.
Note: Local area networks (LANs) based on a hybrid topology are ones where messages may be sent around the ring, or through a central device. Local area networks with this redundancy can be modeled using wheels Wn.

n-Cubes: An n-dimensional hypercube, or n-cube (Qn), is a graph whose vertices represent the 2^n bit strings of length n.
Two vertices of the graph are adjacent if and only if the bit strings they represent differ in exactly one bit position.

9) Bipartite Graphs and Matchings:

A) Bipartite graph: A simple graph G is called bipartite if its vertex set V can be partitioned into two disjoint sets V1 and V2 such that every edge in the graph connects a vertex in V1 and a vertex in V2 (i.e., no edge in G connects two vertices of V1 or two vertices of V2). When this condition is satisfied, we call the pair (V1, V2) a bipartition of the vertex set V of G.
Examples:
1. C6 is bipartite.
2. Km, for m >= 3, is not bipartite.

B) Complete bipartite graphs: A complete bipartite graph Km,n is a graph whose vertex set is partitioned into two subsets of m and n vertices, respectively, with an edge between two vertices if and only if one vertex is in the first subset and the other vertex is in the second subset.

10) Matching:

A) Definition of matching: A matching M in a simple graph G = (V, E) is a subset of the edge set E of the graph such that no two edges are incident with the same vertex. A vertex that is an endpoint of an edge of a matching M is said to be matched in M; otherwise, it is said to be unmatched. A maximum matching is a matching with the largest number of edges.

B) Complete matching of a bipartite graph: A matching M in a bipartite graph G = (V, E) with bipartition (V1, V2) is a complete matching from V1 to V2 if every vertex in V1 is the endpoint of an edge in the matching, or equivalently, if |M| = |V1|.

11) Isomorphism of Graphs:

A) Graph isomorphism: The term comes from the Greek roots isos for "equal" and morphe for "form".
When two simple graphs are isomorphic, there is a one-to-one correspondence between the vertices of the two graphs that preserves the adjacency relationship.

B) Definitions: The simple graphs G1 = (V1, E1) and G2 = (V2, E2) are isomorphic if there exists a bijection f from V1 to V2 with the property that a and b are adjacent in G1 if and only if f(a) and f(b) are adjacent in G2, for all a and b in V1. Such a function f is called an isomorphism. Two simple graphs that are not isomorphic are called nonisomorphic.

C) How to check whether two graphs are isomorphic:
Brute force: try all possible bijections between the vertex sets. For two graphs with n vertices each, there are n! different bijections between them.

Showing that two graphs are not isomorphic:
- A property preserved by graph isomorphisms is called a graph invariant, e.g., the number of vertices, the number of edges, and the degree sequence.
- If two graphs are isomorphic, every graph invariant is identical for both of them.
- Equivalently, if some graph invariant is not identical for two graphs, they are not isomorphic.
Example: If two graphs do not have the same number of vertices, they cannot be isomorphic.
- However, two graphs may have an identical graph invariant and yet not be isomorphic.

1) Graph Connectivity: Paths, Cycles, & Simple Paths

A) Definition of a path/walk: A path, or walk, is a sequence of edges that begins at a vertex of a graph and travels from vertex to vertex along the edges of the graph. Paths tell us whether two vertices are connected, and we need them for problems involving the notion of a shortest path.
Examples:
1. We may need the shortest path between two places while traveling.
2. We also need paths to understand collaboration graphs and the Erdős number.

B) Definitions for undirected graphs:
- Suppose n is a nonnegative integer and G is an undirected graph.
- A path of length n from u to v in G is a sequence of n edges e1, e2, …, en for which there exists a sequence x0 = u, x1, x2, …, xn-1, xn = v of vertices such that ei, for i = 1, 2, …, n, has endpoints xi-1 and xi.
- When the graph is simple, we denote this path by its vertex sequence x0, x1, x2, …, xn (because listing these vertices uniquely determines the path).
- The path is a circuit, or cycle, if it begins and ends at the same vertex, that is, if u = v, and has length greater than zero.
- The path or circuit is said to pass through the vertices x1, x2, …, xn-1 and to traverse the edges e1, e2, …, en.
- A path or circuit is simple if it does not contain the same edge more than once.

C) Definitions for directed graphs:
- Let n be a nonnegative integer and G a directed graph.
- A path of length n from u to v in G is a sequence of edges e1, e2, …, en of G such that e1 is associated with (x0, x1), e2 with (x1, x2), and so on, with en associated with (xn-1, xn), where x0 = u and xn = v.
- When there are no multiple edges in the directed graph, this path is denoted by its vertex sequence x0, x1, x2, …, xn.
- A path of length greater than zero that begins and ends at the same vertex is called a circuit, or cycle.
- A path or circuit is called simple if it does not contain the same edge more than once.

2) Connectedness in Undirected Graphs

An undirected graph is called connected if there is a path between every pair of distinct vertices of the graph. An undirected graph that is not connected is called disconnected. We disconnect a graph when we remove vertices, edges, or both so as to produce a disconnected graph.

A) Connected components: A connected component of a graph G is a connected subgraph of G that is not a proper subgraph of another connected subgraph of G; that is, a connected component of G is a maximal connected subgraph of G.
A graph G that is not connected has two or more connected components that are disjoint and have G as their union.

B) Cut Vertices & Cut Edges:
Sometimes the removal from a graph of a vertex and all its incident edges produces a subgraph with more connected components. Such vertices are called cut vertices (or articulation points). Similarly, an edge whose removal produces a graph with more connected components than the original is called a cut edge (or bridge).
Example: In a computer network, a cut vertex represents an essential router, and a cut edge an essential link, that cannot fail if all computers are to communicate.

C) Vertex Connectivity:
- Connected graphs without cut vertices are called nonseparable graphs.
- They can be considered more connected than those with a cut vertex. For instance, Kn.
- A subset V' of the vertex set V of G = (V, E) is a vertex cut or separating set if G - V' is disconnected.
- The vertex connectivity of a noncomplete graph G, denoted by k(G), is the minimum number of vertices in a vertex cut.
- A graph is k-connected, or k-vertex-connected, if k(G) >= k.

D) Edge Connectivity:
- A set of edges E' is called an edge cut of G if the subgraph G - E' is disconnected.
- The edge connectivity of a graph G, denoted by λ(G), is the minimum number of edges in an edge cut of G.

3) Connectedness in Directed Graphs
In a directed graph, you may be able to go from vertex a to vertex b yet not be able to go from b to a. How, then, can we define the notion of connectivity?
A directed graph is strongly connected if there is a path from a to b and from b to a whenever a and b are vertices in the graph.
A directed graph is weakly connected if there is a path between every two vertices in the underlying undirected graph.
A weakly connected but not strongly connected graph still appears to be in "one piece" when visualized. Any strongly connected graph is also weakly connected, but the converse is not necessarily true.
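The two notions can be tested directly from their definitions: strong connectivity asks whether every vertex reaches every other along directed edges, while weak connectivity forgets edge directions first. A minimal sketch, assuming a hypothetical dict-of-successor-sets representation:

```python
# Strong vs. weak connectivity of a directed graph.
# `adj` maps each vertex to the set of its direct successors.

def reachable(adj, start):
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v])
    return seen

def is_strongly_connected(adj):
    verts = set(adj)
    return all(reachable(adj, v) == verts for v in verts)

def is_weakly_connected(adj):
    # Build the underlying undirected graph, then test connectivity.
    und = {v: set() for v in adj}
    for u, outs in adj.items():
        for w in outs:
            und[u].add(w)
            und[w].add(u)
    start = next(iter(und))
    return reachable(und, start) == set(und)

# a -> b -> c -> a is strongly connected; the extra arc d -> a makes
# the whole graph only weakly connected (nothing reaches d).
g = {"a": {"b"}, "b": {"c"}, "c": {"a"}, "d": {"a"}}
print(is_strongly_connected(g), is_weakly_connected(g))  # False True
```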
Example:

A) Strongly Connected Components:
The subgraphs of a directed graph G that are strongly connected but not contained in larger strongly connected subgraphs, that is, the maximal strongly connected subgraphs, are called the strongly connected components or strong components of G.
Note: If a and b are two vertices in a directed graph, their strong components are either the same or disjoint.
Example: H has three strongly connected components:
- The vertex a
- The vertex e
- The subgraph consisting of the vertices b, c, and d and the edges (b, c), (c, d), and (d, b)

4) Euler Paths and Circuits
A) Euler Circuit: An Euler circuit in a graph G is a simple circuit containing every edge of G.
B) Euler Path: An Euler path in G is a simple path containing every edge of G.
The graph G1 has an Euler circuit, for example, a, e, c, d, e, b, a. Neither G2 nor G3 has an Euler circuit. However, G3 has an Euler path, namely, a, c, d, e, b, d, a, b. G2 does not have an Euler path.

C) Conditions for Euler Circuits/Paths to Exist:
Theorem: A connected multigraph with at least two vertices has an Euler circuit if and only if each of its vertices has even degree.
Theorem: A connected multigraph has an Euler path but not an Euler circuit if and only if it has exactly two vertices of odd degree.

D) Applications of Euler Circuits & Paths:
- Many puzzles ask you to draw a picture continuously, without lifting the pencil, so that no part of the picture is retraced. We can solve such puzzles using Euler circuits and paths.
- Many applications ask for a path or circuit that traverses exactly once:
a. each street in a neighborhood;
b. each road in a transportation network;
c. each connection in a utility grid;
d. each link in a communication network.
- If a postman can find an Euler path in the graph that represents the streets the postman needs to cover, this path produces a route that traverses each street exactly once.
- Other fields of application include:
a. the layout of circuits;
b. network multicasting;
c. molecular biology, for the sequencing of DNA.

5) Hamilton Paths & Circuits
A) Hamilton Circuit: A simple circuit in a graph G that passes through every vertex exactly once is called a Hamilton circuit.
B) Hamilton Path: A simple path in a graph G that passes through every vertex exactly once is called a Hamilton path.
That is, the simple path x0, x1, …, xn-1, xn in the graph G = (V, E) is a Hamilton path if V = {x0, x1, …, xn-1, xn} and xi ≠ xj for 0 ≤ i < j ≤ n. The simple circuit x0, x1, …, xn-1, xn, x0 is a Hamilton circuit.
- G1 has a Hamilton circuit: a, b, c, d, e, a.
- There is no Hamilton circuit in G2. (Note: any circuit containing every vertex must contain the edge {a, b} twice.)
- G2 has a Hamilton path: a, b, c, d.
- G3 has neither a Hamilton circuit nor a Hamilton path, because any path containing all vertices must contain one of the edges {a, b}, {e, f}, and {c, d} more than once.

C) Conditions for the Existence of Hamilton Circuits/Paths:
There are no known simple necessary and sufficient conditions for the existence of Hamilton circuits. Some separate necessary conditions and sufficient conditions are known. For example, a necessary condition for a Hamilton circuit:
a. No vertex can have degree one.
b. A graph with a vertex of degree one cannot have a Hamilton circuit because, in a Hamilton circuit, each vertex is incident with exactly two edges of the circuit.

D) Dirac's Theorem: If G is a simple graph with n vertices, where n ≥ 3, such that the degree of every vertex in G is at least n/2, then G has a Hamilton circuit.

E) NP-Hardness of the Hamilton Circuit Problem:
The best algorithms known for finding a Hamilton circuit in a graph, or determining that no such circuit exists, have exponential worst-case time complexity (in the number of vertices of the graph). NP-hard problems are believed to have no efficient algorithm, though this has not been proved.
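The exponential behaviour described above is visible in the naive approach: trying vertex orderings directly. This sketch also checks Dirac's sufficient condition; the graphs and names are illustrative, not from the text:

```python
from itertools import permutations

# Brute-force Hamilton-circuit search: try every ordering of the
# remaining vertices, so the cost grows like (n-1)!.

def has_hamilton_circuit(vertices, edges):
    edge_set = {frozenset(e) for e in edges}
    first, rest = vertices[0], vertices[1:]
    for order in permutations(rest):
        cycle = (first, *order, first)
        if all(frozenset(p) in edge_set for p in zip(cycle, cycle[1:])):
            return True
    return False

def satisfies_dirac(vertices, edges):
    # Dirac: n >= 3 and every degree >= n/2 guarantees a Hamilton circuit.
    n = len(vertices)
    deg = {v: 0 for v in vertices}
    for u, w in edges:
        deg[u] += 1
        deg[w] += 1
    return n >= 3 and all(d >= n / 2 for d in deg.values())

square = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(has_hamilton_circuit(["a", "b", "c", "d"], square))  # True
print(satisfies_dirac(["a", "b", "c", "d"], square))       # True: every degree is 2 >= 4/2

path = [("a", "b"), ("b", "c")]  # a path graph has no Hamilton circuit
print(has_hamilton_circuit(["a", "b", "c"], path))         # False
```

Note that Dirac's condition is sufficient but not necessary: the 4-cycle satisfies it, but larger cycles Cn with n > 4 have Hamilton circuits while failing the degree bound.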
F) Applications: Traveling Salesperson Problem (TSP)
TSP reduces to finding a Hamilton circuit in a complete weighted graph such that the total weight of its edges is as small as possible.

6) Planar Graphs and Euler's Formula
A) Planar Graphs:
- A graph is said to be planar if it can be drawn in the plane without any edges crossing.
- A crossing of edges is the intersection of the lines or arcs representing them at a point other than their common endpoint.
- Such a drawing is called a planar representation of the graph.
- Thus, a graph that happens to be drawn with edges crossing may still have a planar representation.

B) Planar Graphs & Regions:
A planar representation of a graph splits the plane into regions, including an unbounded region. Is there a relationship between the number of regions and the numbers of vertices and edges in a planar graph?

C) Euler's Formula:
Let G be a connected planar simple graph with e edges and v vertices, and let r be the number of regions in a planar representation of G. Then
r = e - v + 2
Example:
Applications of Planar Graphs:
- Design of road networks.
- Design of electronic circuits.

7) Graph Coloring
Coloring in a graph is used for better readability and for distinguishing different elements of the graph. How many colors are required to color a map/graph? The answer is four! Let us consider the following to understand this better:

A) Dual Graphs & Coloring:
In map coloring, each map has a dual graph: a vertex for each region, with an edge between two vertices whose regions share a border, so coloring the map amounts to coloring this graph.
Coloring: the assignment of a color to each vertex of a simple graph such that no two adjacent vertices are assigned the same color.
Chromatic number: the least number of colors needed to color a graph. The chromatic number of a graph G is denoted by χ(G) (χ is the Greek letter chi).

B) Graph Coloring: The Four-Color Theorem:
- The chromatic number of a planar graph is no greater than four.
- People trying to prove it made many incorrect attempts.
- The accepted proof required computer assistance to verify the cases.
- It applies only to planar graphs.
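A legal coloring can always be produced by a simple greedy pass: give each vertex the smallest color not used by an already-colored neighbour. This gives an upper bound on the chromatic number, not necessarily the chromatic number itself; the representation below (a dict of neighbour sets) is an assumption for this sketch:

```python
# Greedy vertex coloring: colors are the integers 0, 1, 2, ...

def greedy_coloring(graph):
    color = {}
    for v in graph:  # visiting order can change how many colors are used
        taken = {color[u] for u in graph[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# C5, the cycle on five vertices (odd cycle, chromatic number 3).
c5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
coloring = greedy_coloring(c5)
print(max(coloring.values()) + 1)  # 3 colors used
```

For an odd cycle the greedy bound happens to match the true chromatic number; in general greedy may use more than χ(G) colors, depending on the visiting order.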
Example: What is the chromatic number of Kn?
The chromatic number of Kn is n. In Kn, if any pair of vertices is colored the same, the coloring is illegitimate, because the two vertices have an edge between them by virtue of the graph being complete.
What is the chromatic number of Km,n?
The chromatic number of Km,n is two: a complete bipartite graph can be colored with only two colors.
What is the chromatic number of C6? The chromatic number is 2.
What is the chromatic number of C5? The chromatic number is 3.

C) Applications of Graph Coloring: Scheduling Final Exams
Challenge: Assign a time slot for each exam so that no student enrolled in two courses has to miss a final exam.
Steps to model this as a graph question:
- Create a vertex for each course.
- Add an edge between two vertices if at least one student is enrolled in both courses.
- The chromatic number of the graph is the minimum number of time slots needed to schedule the final exams (and any legitimate coloring yields a feasible schedule).

1) Trees and Spanning Trees:
A) Trees: Informally, a tree is a graph that does not have cycles. How can we understand the notion of trees?
1. Think of a hierarchy:
B) Game Trees: A game tree is a graph representing all possible game states and the transitions between them for a certain game. Similar decision trees can be made for games like chess. The trees for chess are very large due to the multiple pieces and moves, and it is difficult to keep such a tree in memory. A lot of work has been done on heuristics for choosing the next move from a given game state.
Fact!
"IBM's Deep Blue used these techniques to beat the best chess player in the world, Garry Kasparov, in 1997."
2. We can also think of trees in chemistry: this is an example of adjacent-node lists and carbon-position lists.
3. Computer file systems: for example, the Unix file system.

C) Applications of Trees:
They are used in data structures, to organize data in ways that allow efficient access and manipulation:
- Binary search trees
- Huffman codes
- Prefix trees/tries

2) Trees & Rooted Trees:
A) Definitions:
- A tree is a connected undirected graph with no simple circuits.
- Undirected graphs containing no simple circuits, but not necessarily connected, are called forests; each of their connected components is a tree.
- Simple circuits are also called cycles.
- A tree cannot have a simple circuit, so it cannot have multiple edges or loops. Therefore, any tree must be a simple graph.
Theorem: An undirected graph is a tree if and only if there is a unique simple path between any two of its vertices.

B) Rooted Trees:
- A rooted tree is a tree in which one vertex has been designated as the root and every edge is directed away from the root.
- A rooted tree is therefore a directed graph.
Example: This is the tree rooted at A.

Terminology Related to Rooted Trees:
Suppose T is a rooted tree.
1. If v is a vertex in T other than the root, the parent of v is the unique vertex u such that there is a directed edge from u to v.
2. When u is the parent of v, v is called a child of u.
3. Vertices with the same parent are called siblings.
4. The ancestors of a vertex other than the root are the vertices in the path from the root to this vertex, excluding the vertex itself and including the root (that is, its parent, its parent's parent, and so on, until the root is reached).
5. The descendants of a vertex v are those vertices that have v as an ancestor.
6. A vertex of a rooted tree is called a leaf if it has no children.
7. Vertices that have children are called internal vertices.
8. The root is an internal vertex unless it is the only vertex in the graph, in which case it is a leaf.
9. If a is a vertex in a tree, the subtree with a as its root is the subgraph of the tree consisting of a, its descendants, and all edges incident to these descendants.
10. A rooted tree is called an m-ary tree if every internal vertex has no more than m children.
11. The tree is called a full m-ary tree if every internal vertex has exactly m children.
12. An m-ary tree with m = 2 is called a binary tree.
13. An ordered rooted tree is a rooted tree in which the children of each internal vertex are ordered.
14. Ordered rooted trees are drawn so that the children of each internal vertex are shown in order from left to right.
15. Note that a conventional drawing of a rooted tree determines the order of its edges. Such orderings of edges will be used in drawings without explicit mention that an ordered rooted tree is being considered.
16. In an ordered binary tree (usually called just a binary tree), if an internal vertex has two children, the first child is called the left child and the second child is called the right child.
17. The tree rooted at the left child of a vertex is called the left subtree of this vertex.
18. The tree rooted at the right child of a vertex is called the right subtree of the vertex.

3) Properties of Trees:
Theorem 1: A tree with n vertices has n - 1 edges.
Theorem 2: A full m-ary tree with i internal vertices contains n = mi + 1 vertices.

4) Spanning Trees:
Let G be a simple graph. A spanning tree of G is a subgraph of G that is a tree containing every vertex of G.
A simple graph with a spanning tree must be connected, because there is a path in the spanning tree between any two vertices.
Consider the following simple graph G: it is connected and has some cycles. To construct a spanning tree, find cycles and remove edges until no cycles remain.
Step 1: Remove the edge {a, e}.
Step 2: Remove the edge {e, f}.
Step 3: Remove the edge {c, g}.

A) Weighted Graphs: A weighted graph is a graph with a number assigned to each edge; the assigned number is the weight of the corresponding edge.
B) Spanning Trees: A spanning tree is a subgraph of G in which all the vertices are connected using the minimum possible number of edges. Hence a spanning tree has no cycles, and a graph may have more than one spanning tree.
C) Minimum Spanning Tree: A minimum spanning tree (MST) is the spanning tree with the minimum sum of edge weights among all possible spanning trees.

5) Prim's Algorithm:
- Given a weighted connected graph, Prim's algorithm determines a minimum spanning tree of that graph.
- It does this by successively "growing" the tree, starting from a minimum-weight edge.
Steps involved in finding a minimum spanning tree using Prim's algorithm:
1. Begin by choosing any edge with the smallest weight and put it into the spanning tree.
2. Consider edges of the graph that are incident to a vertex already in the tree.
3. Choose a minimum-weight edge that does not form a simple circuit with the edges already in the tree.
4. Go to step 2, unless n - 1 edges have been added.

Example: Consider the following graph, for which we need to find the minimum spanning tree (MST):
Step 1: Consider an arbitrary vertex as the starting vertex of the MST. Here we have chosen vertex 0.
Step 2: The edges connecting vertex 0 are {0, 1} and {0, 7}. Of these incident edges, the least weight is on the edge {0, 1}, so we add vertex 1 to the MST.
Step 3: We now consider the edges incident on vertices 0 and 1. Edges from vertex 0: {0, 7}; edges from vertex 1: {1, 2}, {1, 7}. Of the available edges, {0, 7} has the least weight, so we include vertex 7 in the MST.
Step 4: The edges connecting the incomplete MST with the remaining vertices are {1, 2}, {7, 6}, and {7, 8}. Add the edge {7, 6} and the vertex 6 to the MST, as that edge has the least weight.
Step 5: The connecting edges are now {7, 8}, {1, 2}, {6, 8}, and {6, 5}. Include edge {6, 5} and vertex 5 in the MST, as this edge has the minimum weight (i.e., 2) among them.
Step 6: Among the current connecting edges, the edge {5, 2} has the minimum weight, so include that edge and vertex 2 in the MST.
Step 7: The connecting edges between the incomplete MST and the other vertices are {2, 8}, {2, 3}, {5, 3}, and {5, 4}. The edge with minimum weight is {2, 8}, with weight 2, so include this edge and vertex 8 in the MST.
Step 8: Here the edges {7, 8} and {2, 3} have the same minimum weight. But vertices 7 and 8 are both already in the MST, so adding {7, 8} would form a cycle. We therefore take the edge {2, 3} and include it and vertex 3 in the MST.
Step 9: Only vertex 4 remains to be included. The minimum-weight edge from the incomplete MST to 4 is {3, 4}.
The final structure of the MST is as follows, and the total weight of the edges of the MST is 4 + 8 + 1 + 2 + 4 + 2 + 7 + 9 = 37.
The Final Structure of the MST
NOTE: If we had taken the edge {1, 2} in the third step, the MST would look like the following.

6) Kruskal's Algorithm:
Given a weighted connected graph, Kruskal's algorithm determines a minimum spanning tree of that graph.
Steps involved in finding an MST using Kruskal's algorithm:
1. Choose an edge in the graph with minimum weight.
2. Successively add edges with minimum weight that do not form a simple circuit with the edges already chosen.
3. Stop after n - 1 edges have been selected.
Consider the following graph, for which we need to find the minimum spanning tree:
Step 1: Pick edge {7, 6}. No cycle is formed; include it.
Step 2: Pick edge {8, 2}. No cycle is formed; include it.
Step 3: Pick edge {6, 5}. No cycle is formed; include it.
Step 4: Pick edge {0, 1}. No cycle is formed; include it.
Step 5: Pick edge {2, 5}. No cycle is formed; include it.
Step 6: Pick edge {8, 6}. Including this edge results in a cycle, so discard it. Pick edge {2, 3}: no cycle is formed; include it.
Step 7: Pick edge {7, 8}. Including this edge results in a cycle, so discard it. Pick edge {0, 7}: no cycle is formed; include it.
Step 8: Pick edge {1, 2}. Including this edge results in a cycle, so discard it. Pick edge {3, 4}: no cycle is formed; include it.
This is the final MST.

Abstraction & Abstract Algebra

A) Abstraction:
The process of deriving general rules and concepts from specific examples: for example, abstracting a leather soccer ball to the more general idea of a ball.

B) Abstract Algebra:
Abstract algebra is the branch of algebra that studies algebraic systems, or structures, with one or more mathematical operations on elements with an identifiable pattern, differing from the usual number systems. In abstract algebra, the elements combined in the operations need not be interpretable as numbers; hence its abstract nature.
Example: Can we think of an operation like addition abstractly? Yes: addition on the natural numbers is simply a function of the form N × N -> N.
Can we define binary operations of this form on other sets? Yes: for any set S, a binary operation is a function S × S -> S.

C) Closed Binary Operation:
A binary operation on a set is closed if applying it to any two elements of the set always yields an element of the same set.
- Addition on the natural numbers is a closed binary operation.
- Subtraction on the natural numbers is not a closed binary operation.
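Closure can be checked exhaustively when the set is finite (the natural numbers themselves are infinite, so finite stand-ins are used below). A minimal sketch; the sets and operation names are illustrative assumptions:

```python
# Test closure of a binary operation `op` on a finite set `s` by
# applying it to every ordered pair of elements.

def is_closed(s, op):
    return all(op(a, b) in s for a in s for b in s)

residues_mod5 = {0, 1, 2, 3, 4}
add_mod5 = lambda a, b: (a + b) % 5
print(is_closed(residues_mod5, add_mod5))  # True: the result stays in the set

small_naturals = {0, 1, 2, 3}
# Ordinary addition escapes this finite subset: 3 + 3 = 6 is outside it.
print(is_closed(small_naturals, lambda a, b: a + b))  # False
```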
D) Associative Binary Operation:
A binary operation ⨁ defined on a set A is associative if for all a, b, c in A, (a ⨁ b) ⨁ c = a ⨁ (b ⨁ c).
Example: addition on the natural numbers.
An example of a non-associative operation on the natural numbers: a ⨁ b = a² + b². Then
(a ⨁ b) ⨁ c = (a² + b²)² + c² = a⁴ + 2a²b² + b⁴ + c²
a ⨁ (b ⨁ c) = a² + (b² + c²)² = a² + b⁴ + 2b²c² + c⁴

8) Semi-groups and Monoids:
A pair (S, ⨁), where S is a set and ⨁ is a binary operation on S, is called an algebraic system.

A) Semi-group:
Let (S, ⨁) be an algebraic system. (S, ⨁) is called a semi-group if the following are true:
1. ⨁ is a closed operation.
2. ⨁ is an associative operation.
Examples:
1. The natural numbers with addition form a semi-group.
2. The natural numbers with the operation a ⨁ b = a² + b² do not form a semi-group (the operation is not associative).

B) Left Identity and Right Identity:
- Suppose * is a binary operation on a set T.
- An element e_l is called a left identity for the operation * if for every x in T, e_l * x = x.
- An element e_r is called a right identity for the operation * if for every x in T, x * e_r = x.
Theorem: If both exist, e_l = e_r.

C) Monoid:
Let (S, ⨁) be an algebraic system. (S, ⨁) is called a monoid if the following are true:
1. ⨁ is a closed operation.
2. ⨁ is an associative operation.
3. There is an identity element.
Examples:
1. The integers with addition form a monoid.
2. The positive integers with addition form a semi-group but not a monoid.

9) Groups:
A) Inverses:
- Suppose * is a binary operation on a set T, with e being the identity element.
- An element l_a is called a left inverse of an element a in T if l_a * a = e.
- An element r_a is called a right inverse of a if a * r_a = e.
Theorem: If * is an associative binary operation, then l_a = r_a.
- If there exists an inverse for an element a, it is unique: if two elements b and c are both inverses of a, then b = c.
- We can prove this in the same manner as the proof above, substituting b and c for the left and right inverses.
- For an associative binary operation, we refer to the inverse of an element a, if it exists, as a⁻¹.

B) Group:
Let (S, ⨁) be an algebraic system. (S, ⨁) is called a group if the following are true:
1. ⨁ is a closed operation.
2. ⨁ is an associative operation.
3. There is an identity element e.
4. Every element in S has an inverse.
Examples:
1. The integers with addition form a group: 0 is the identity element, and for every a, -a is the inverse.
2. The non-negative integers with addition form a monoid, but not a group.

C) Abelian Group:
A group (S, ⨁) is called an abelian group, or a commutative group, if ⨁ is commutative, that is, for all a, b in S,
a ⨁ b = b ⨁ a

10) Subgroups and Rings:
A) Subgroups:
- Suppose (S, ⨁) is a group, and T ⊆ S.
- (T, ⨁) is called a subgroup of S if (T, ⨁) is itself a group.
Examples:
1. The integers with addition form a group: 0 is the identity, and for every a, -a is the inverse.
2. The even integers with addition form a subgroup of this group.
3. The odd integers with addition do not form a subgroup of (Z, +).

B) Distributivity:
- Suppose (S, ⨁, *) is an algebraic system.
- * is said to be distributive over ⨁ if for all a, b, c in S,
a * (b ⨁ c) = (a * b) ⨁ (a * c)

C) Rings:
An algebraic system (S, ⨁, *) is called a ring if:
1. (S, ⨁) is an abelian group.
2. (S, *) is a semi-group.
3. * is distributive over ⨁.
- The identity of ⨁ is called 0.
- A ring (S, ⨁, *) in which * is also commutative is called a commutative ring.
- Z, Q, R, and C are all commutative rings.
Theorem: In any ring, 0 * a = 0.
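For a finite set, all four group axioms can be verified exhaustively. A minimal sketch using Z5 (the integers modulo 5) under addition, which is certainly a group; the function names and sets are illustrative:

```python
# Exhaustively check the group axioms (closure, associativity,
# identity, inverses) for a binary operation `op` on a finite set `s`.

def is_group(s, op):
    # 1. Closure.
    if not all(op(a, b) in s for a in s for b in s):
        return False
    # 2. Associativity.
    if not all(op(op(a, b), c) == op(a, op(b, c))
               for a in s for b in s for c in s):
        return False
    # 3. Identity element.
    ident = next((e for e in s
                  if all(op(e, x) == x == op(x, e) for x in s)), None)
    if ident is None:
        return False
    # 4. Every element has an inverse.
    return all(any(op(a, b) == ident for b in s) for a in s)

z5 = {0, 1, 2, 3, 4}
add5 = lambda a, b: (a + b) % 5
print(is_group(z5, add5))  # True

# A finite slice of the naturals fails at step 1: + is not even closed.
print(is_group({0, 1, 2}, lambda a, b: a + b))  # False
```

This brute-force check runs in O(|S|³) time because of the associativity test, so it is only practical for small finite systems.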
