
COLLEGE OF SCIENCES TECHNOLOGY AND COMMUNICATIONS, INC.
SCHOOL OF TEACHER EDUCATION
A.Y. 2023-2024

Learning Module

TOPIC: Modern Algebra and Number Theory - Birth of Set Theory and Problems in the Foundations of Mathematics

Submitted by: VON RUSSEL D. MANALO, Bachelor of Secondary Education Major in Math
Submitted to: Mr. Victor M. Disilio, Cooperating Teacher

I. Objectives:

At the end of this lesson, the students should be able to:
• define what modern algebra and number theory are;
• understand what algebra and number theory are about; and
• classify the different theories.

II. Discussion Proper:

Modern algebra, branch of mathematics concerned with the general algebraic structure of various sets (such as the real numbers, complex numbers, matrices, and vector spaces), rather than with rules and procedures for manipulating their individual elements. During the second half of the 19th century, various important mathematical advances led to the study of sets in which any two elements can be added or multiplied together to give a third element of the same set. The elements of the sets concerned could be numbers, functions, or some other objects. As the techniques involved were similar, it seemed reasonable to consider the sets, rather than their elements, to be the objects of primary concern. A definitive treatise, Modern Algebra, was written in 1930 by the Dutch mathematician Bartel van der Waerden, and the subject has had a deep effect on almost every branch of mathematics.

Basic algebraic structures

In itself a set is not very useful, being little more than a well-defined collection of mathematical objects. However, when a set has one or more operations (such as addition and multiplication) defined for its elements, it becomes very useful. If the operations satisfy familiar arithmetic rules (such as associativity, commutativity, and distributivity), the set will have a particularly “rich” algebraic structure. Sets with the richest algebraic structure are known as fields. Familiar examples of fields are the rational numbers (fractions a/b where a and b are positive or negative whole numbers), the real numbers (rational and irrational numbers), and the complex numbers (numbers of the form a + bi where a and b are real numbers and i² = −1). Each of these is important enough to warrant its own special symbol: ℚ for the rationals, ℝ for the reals, and ℂ for the complex numbers. The term field in its algebraic sense is quite different from its use in other contexts, such as vector fields in mathematics or magnetic fields in physics. Other languages avoid this conflict in terminology; for example, a field in the algebraic sense is called a corps in French and a Körper in German, both words meaning “body.” In addition to the fields mentioned above, which all have infinitely many elements, there exist fields having only a finite number of elements (always some power of a prime number), and these are of great importance, particularly for discrete mathematics. In fact, finite fields motivated the early development of abstract algebra. The simplest finite field has only two elements, 0 and 1, where 1 + 1 = 0. This field has applications to coding theory and data communication.
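The two-element field just described can be checked directly: take 0 and 1 with arithmetic modulo 2, so that 1 + 1 = 0; every sum and product stays inside the set, and the only nonzero element, 1, is its own multiplicative inverse. A minimal Python sketch of that check (the function names add and mul are illustrative, not part of the module):

```python
# Arithmetic in the two-element field {0, 1}: integers taken modulo 2.
ELEMENTS = (0, 1)

def add(a, b):
    """Addition modulo 2; in particular 1 + 1 = 0."""
    return (a + b) % 2

def mul(a, b):
    """Multiplication modulo 2."""
    return (a * b) % 2

# Closure: every sum and every product lands back in {0, 1}.
assert all(add(a, b) in ELEMENTS and mul(a, b) in ELEMENTS
           for a in ELEMENTS for b in ELEMENTS)

# The defining quirk of this field: 1 + 1 = 0.
assert add(1, 1) == 0

# The only nonzero element, 1, is its own multiplicative inverse.
assert mul(1, 1) == 1

print({(a, b): add(a, b) for a in ELEMENTS for b in ELEMENTS})  # addition table
print({(a, b): mul(a, b) for a in ELEMENTS for b in ELEMENTS})  # multiplication table
```

The same modulo-2 arithmetic is what underlies the coding-theory and data-communication applications mentioned above.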
Structural axioms

The basic rules, or axioms, for addition and multiplication are shown in the table, and a set that satisfies all 10 of these rules is called a field. A set satisfying only axioms 1–7 is called a ring, and if it also satisfies axiom 9 it is called a ring with unity. A ring satisfying the commutative law of multiplication (axiom 8) is known as a commutative ring. When axioms 1–9 hold and there are no proper divisors of zero (i.e., whenever ab = 0, either a = 0 or b = 0), a set is called an integral domain. For example, the set of integers {…, −2, −1, 0, 1, 2, …} is a commutative ring with unity, but it is not a field, because axiom 10 fails. When only axiom 8 fails, a set is known as a division ring or skew field.

Quaternions and abstraction

The discovery of rings having noncommutative multiplication was an important stimulus in the development of modern algebra. For example, the set of n-by-n matrices is a noncommutative ring, but since there are nonzero matrices without inverses, it is not a division ring. The first example of a noncommutative division ring was the quaternions. These are numbers of the form a + bi + cj + dk, where a, b, c, and d are real numbers and 1, i, j, and k are unit vectors that define a four-dimensional space. Quaternions were invented in 1843 by the Irish mathematician William Rowan Hamilton to extend complex numbers from the two-dimensional plane to three dimensions in order to describe physical processes mathematically. Hamilton defined the following rules for quaternion multiplication: i² = j² = k² = −1, ij = k = −ji, jk = i = −kj, and ki = j = −ik. After struggling for some years to discover consistent rules for working with his higher-dimensional complex numbers, inspiration struck while he was strolling in his hometown of Dublin, and he stopped to inscribe these formulas on a nearby bridge. In working with his quaternions, Hamilton laid the foundations for the algebra of matrices and led the way to more abstract notions of numbers and operations.

Group theory

In addition to developments in number theory and algebraic geometry, modern algebra has important applications to symmetry by means of group theory. The word group often refers to a group of operations, possibly preserving the symmetry of some object or an arrangement of like objects. In the latter case the operations are called permutations, and one talks of a group of permutations, or simply a permutation group. If α and β are operations, their composite (α followed by β) is usually written αβ, and their composite in the opposite order (β followed by α) is written βα. In general, αβ and βα are not equal. A group can also be defined axiomatically as a set with multiplication that satisfies the axioms for closure, associativity, identity element, and inverses (axioms 1, 6, 9, and 10). In the special case where αβ and βα are equal for all α and β, the group is called commutative, or Abelian; for such Abelian groups, operations are sometimes written α + β instead of αβ, using addition in place of multiplication. The first application of group theory was by the French mathematician Évariste Galois (1811–32) to settle an old problem concerning algebraic equations. The question was to decide whether a given equation could be solved using radicals (meaning square roots, cube roots, and so on, together with the usual operations of arithmetic). By using the group of all “admissible” permutations of the solutions, now known as the Galois group of the equation, Galois showed how to determine whether or not the solutions could be expressed in terms of radicals. His was the first important use of groups, and he was the first to use the term in its modern technical sense.
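The Galois group just described is a group of permutations of an equation's solutions, and the earlier observation that the composites αβ and βα are in general different can already be seen with permutations of three objects. The minimal Python sketch below uses one convenient convention (a permutation stored as a tuple, composed left to right to match "α followed by β"); none of the names come from the module itself:

```python
# A permutation of {0, 1, 2} is stored as a tuple p, where p[i] is the image of i.
alpha = (1, 0, 2)   # swap the first two objects
beta = (0, 2, 1)    # swap the last two objects

def compose(p, q):
    """Composite 'p followed by q': apply p first, then q (the text's αβ with p = α, q = β)."""
    return tuple(q[p[i]] for i in range(len(p)))

print(compose(alpha, beta))  # (2, 0, 1)
print(compose(beta, alpha))  # (1, 2, 0)
# The two composites differ, so this permutation group is not commutative (not Abelian).
```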
It was many years before his work was fully understood, in part because of its highly innovative character and in part because he was not around to explain his ideas—at the age of 20 he was mortally wounded in a duel. The subject is now known as Galois theory.

Group theory developed first in France and then in other European countries during the second half of the 19th century. One early and essential idea was that many groups, and in particular all finite groups, could be decomposed into simpler groups in an essentially unique way. These simpler groups could not be decomposed further, and so they were called “simple,” although their lack of further decomposition often makes them rather complex. This is rather like decomposing a whole number into a product of prime numbers, or a molecule into atoms. In 1963 a landmark paper by the American mathematicians Walter Feit and John Thompson showed that if a finite simple group is not merely the cyclic group of rotations of a regular polygon with a prime number of sides, then it must have an even number of elements. This result was immensely important because it showed that such groups had to have some elements x ≠ 1 such that x² = 1. Using such elements enabled mathematicians to get a handle on the structure of the whole group. The paper led to an ambitious program for finding all finite simple groups that was completed in the early 1980s. It involved the discovery of several new simple groups, one of which, the “Monster,” cannot operate in fewer than 196,883 dimensions. The Monster still stands as a challenge today because of its intriguing connections with other parts of mathematics.

Emmy Noether (born March 23, 1882, Erlangen, Germany—died April 14, 1935, Bryn Mawr, Pennsylvania, U.S.) was a German mathematician whose innovations in higher algebra gained her recognition as the most creative abstract algebraist of modern times. Noether was certified to teach English and French in schools for girls in 1900, but she instead chose to study mathematics at the University of Erlangen (now University of Erlangen-Nürnberg). At that time, women were only allowed to audit classes with the permission of the instructor. She spent the winter of 1903–04 auditing classes at the University of Göttingen taught by the mathematicians David Hilbert, Felix Klein, and Hermann Minkowski and the astronomer Karl Schwarzschild. She returned to Erlangen in 1904, when women were allowed to be full students there. She received a Ph.D. degree from Erlangen in 1907, with a dissertation on algebraic invariants. She remained at Erlangen, where she worked without pay on her own research and assisted her father, the mathematician Max Noether (1844–1921).

In 1915 Noether was invited to Göttingen by Hilbert and Klein and soon used her knowledge of invariants to help them explore the mathematics behind Albert Einstein’s recently published theory of general relativity. Hilbert and Klein persuaded her to remain there despite the vehement objections of some faculty members to a woman teaching at the university. Nevertheless, she could only lecture in classes under Hilbert’s name. In 1918 Noether discovered that if the Lagrangian (a quantity that characterizes a physical system; in mechanics, it is kinetic minus potential energy) does not change when the coordinate system changes, then there is a quantity that is conserved. For example, when the Lagrangian is independent of changes in time, energy is the conserved quantity.
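For a system with a single coordinate q, that statement can be sketched in standard Lagrangian notation. The derivation below uses the conventional physics symbols (q, its time derivative q̇, the Lagrangian L, and the energy E), none of which are defined in the module itself; it is only meant to make the time-symmetry example concrete:

```latex
% Energy function built from a Lagrangian L(q, \dot q, t):
\[
  E \;=\; \dot q\,\frac{\partial L}{\partial \dot q} \;-\; L .
\]
% Differentiating along a motion that satisfies the Euler-Lagrange equation
% d/dt (\partial L/\partial \dot q) = \partial L/\partial q, the q and \dot q terms cancel:
\[
  \frac{dE}{dt}
    \;=\; \ddot q\,\frac{\partial L}{\partial \dot q}
        + \dot q\,\frac{d}{dt}\frac{\partial L}{\partial \dot q}
        - \frac{\partial L}{\partial q}\,\dot q
        - \frac{\partial L}{\partial \dot q}\,\ddot q
        - \frac{\partial L}{\partial t}
    \;=\; -\,\frac{\partial L}{\partial t} .
\]
% If L has no explicit dependence on t, then dE/dt = 0: invariance under shifts in time
% forces conservation of the energy E.
```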
This relation between what are known as the symmetries of a physical system and its conservation laws is known as Noether’s theorem, and it has proven to be a key result in theoretical physics. She won formal admission as an academic lecturer in 1919.

From 1927 Noether concentrated on noncommutative algebras (algebras in which the order in which numbers are multiplied affects the answer), their linear transformations, and their application to commutative number fields. She built up the theory of noncommutative algebras in a newly unified and purely conceptual way. In collaboration with Helmut Hasse and Richard Brauer, she investigated the structure of noncommutative algebras and their application to commutative fields by means of crossed products (an algebraic construction, distinct from the vector cross product). Important papers from this period are “Hyperkomplexe Grössen und Darstellungstheorie” (1929; “Hypercomplex Number Systems and Their Representation”) and “Nichtkommutative Algebra” (1933; “Noncommutative Algebra”).

Group theory, in modern algebra, the study of groups, which are systems consisting of a set of elements and a binary operation that can be applied to two elements of the set, which together satisfy certain axioms. These require that the group be closed under the operation (the combination of any two elements produces another element of the group), that it obey the associative law, that it contain an identity element (which, combined with any other element, leaves the latter unchanged), and that each element have an inverse (which combines with an element to produce the identity element). If the group also satisfies the commutative law, it is called a commutative, or abelian, group. The set of integers under addition, where the identity element is 0 and the inverse of each integer is its negative, is an abelian group. Groups are vital to modern algebra; their basic structure can be found in many mathematical phenomena. Groups can be found in geometry, representing phenomena such as symmetry and certain types of transformations. Group theory has applications in physics, chemistry, and computer science, and even puzzles like Rubik’s Cube can be represented using group theory.

Linear algebra, mathematical discipline that deals with vectors and matrices and, more generally, with vector spaces and linear transformations. Unlike other parts of mathematics that are frequently invigorated by new ideas and unsolved problems, linear algebra is very well understood. Its value lies in its many applications, from mathematical physics to modern algebra and coding theory.

Vectors and vector spaces

Linear algebra usually starts with the study of vectors, which are understood as quantities having both magnitude and direction. Vectors lend themselves readily to physical applications. For example, consider a solid object that is free to move in any direction. When two forces act at the same time on this object, they produce a combined effect that is the same as a single force. To picture this, represent the two forces v and w as arrows; the direction of each arrow gives the direction of the force, and its length gives the magnitude of the force. The single force that results from combining v and w is called their sum, written v + w. In the figure, v + w corresponds to the diagonal of the parallelogram formed from adjacent sides represented by v and w. Vectors are often expressed using coordinates.
For example, in two dimensions a vector can be defined by a pair of coordinates (a₁, a₂) describing an arrow going from the origin (0, 0) to the point (a₁, a₂). If one vector is (a₁, a₂) and another is (b₁, b₂), then their sum is (a₁ + b₁, a₂ + b₂); this gives the same result as the parallelogram (see the figure). In three dimensions a vector is expressed using three coordinates (a₁, a₂, a₃), and this idea extends to any number of dimensions.

Representing vectors as arrows in two or three dimensions is a starting point, but linear algebra has been applied in contexts where this is no longer appropriate. For example, in some types of differential equations the sum of two solutions gives a third solution, and any constant multiple of a solution is also a solution. In such cases the solutions can be treated as vectors, and the set of solutions is a vector space in the following sense. In a vector space any two vectors can be added together to give another vector, and vectors can be multiplied by numbers to give “shorter” or “longer” vectors. The numbers are called scalars because in early examples they were ordinary numbers that altered the scale, or length, of a vector. For example, if v is a vector and 2 is a scalar, then 2v is a vector in the same direction as v but twice as long. In many modern applications of linear algebra, scalars are no longer ordinary real numbers, but the important thing is that they can be combined among themselves by addition, subtraction, multiplication, and division. For example, the scalars may be complex numbers, or they may be elements of a finite field such as the field having only the two elements 0 and 1, where 1 + 1 = 0. The coordinates of a vector are scalars, and when these scalars are from the field of two elements, each coordinate is 0 or 1, so each vector can be viewed as a particular sequence of 0s and 1s. This is very useful in digital processing, where such sequences are used to encode and transmit data.

Linear transformations and matrices

Vector spaces are one of the two main ingredients of linear algebra, the other being linear transformations (or “operators” in the parlance of physicists). Linear transformations are functions that send, or “map,” one vector to another vector. The simplest example of a linear transformation sends each vector to c times itself, where c is some constant. Thus, every vector remains in the same direction, but all lengths are multiplied by c. Another example is a rotation, which leaves all lengths the same but alters the directions of the vectors. Linear refers to the fact that the transformation preserves vector addition and scalar multiplication. This means that if T is a linear transformation sending a vector v to T(v), then for any vectors v and w, and any scalar c, the transformation must satisfy the properties T(v + w) = T(v) + T(w) and T(cv) = cT(v). When doing computations, linear transformations are treated as matrices. A matrix is a rectangular arrangement of scalars, and two matrices can be added or multiplied as shown in the table. The product of two matrices shows the result of doing one transformation followed by another (from right to left), and if the transformations are done in reverse order the result is usually different. Thus, the product of two matrices depends on the order of multiplication; if S and T are square matrices (matrices with the same number of rows as columns) of the same size, then ST and TS are rarely equal.
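This order dependence is easy to exhibit with two small matrices. The 2-by-2 matrices below are chosen purely for illustration, and plain Python lists stand in for whatever matrix library one might actually use:

```python
# Two 2-by-2 matrices, stored as lists of rows.
S = [[1, 1],
     [0, 1]]   # a shear
T = [[0, -1],
     [1,  0]]  # a quarter-turn rotation

def matmul(A, B):
    """Product AB of 2-by-2 matrices; acting on column vectors, B is applied first, then A."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

print(matmul(S, T))  # [[1, -1], [1, 0]]
print(matmul(T, S))  # [[0, -1], [1, 1]]
# ST and TS are different matrices, so performing the two transformations in the
# opposite order produces a different overall transformation.
```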
The matrix for a given transformation is found using coordinates. For example, in two dimensions a linear transformation T can be completely determined simply by knowing its effect on any two vectors v and w that have different directions. Their transformations T(v) and T(w) are each given by two coordinates; therefore, only four coordinates, two for T(v) and two for T(w), are needed to specify T. These four coordinates are arranged in a 2-by-2 matrix. In three dimensions three vectors u, v, and w are needed, and to specify T(u), T(v), and T(w) one needs three coordinates for each. This results in a 3-by-3 matrix.

Eigenvectors

When studying linear transformations, it is extremely useful to find nonzero vectors whose direction is left unchanged by the transformation. These are called eigenvectors (also known as characteristic vectors). If v is an eigenvector for the linear transformation T, then T(v) = λv for some scalar λ. This scalar is called an eigenvalue. The eigenvalue of greatest absolute value, along with its associated eigenvector, has special significance for many physical applications. This is because whatever process is represented by the linear transformation often acts repeatedly—feeding output from the last transformation back into another transformation—which results in nearly every arbitrary (nonzero) starting vector converging on the eigenvector associated with the largest eigenvalue, though rescaled by a power of the eigenvalue. In other words, the long-term behaviour of the system is determined by its eigenvectors. Finding the eigenvectors and eigenvalues for a linear transformation is often done using matrix algebra, first developed in the mid-19th century by the English mathematician Arthur Cayley. His work formed the foundation for modern linear algebra.

Binomial theorem, statement that for any positive integer n, the nth power of the sum of two numbers a and b may be expressed as the sum of n + 1 terms of the form C(n, r)aⁿ⁻ʳbʳ, where, in the sequence of terms, the index r takes on the successive values 0, 1, 2, …, n. The coefficients C(n, r), called the binomial coefficients, are defined by the formula C(n, r) = n!/(r!(n − r)!), in which n! (called n factorial) is the product of the first n natural numbers 1, 2, 3, …, n (and where 0! is defined as equal to 1). The coefficients may also be found in the array often called Pascal’s triangle by finding the rth entry of the nth row (counting begins with a zero in both directions). Each entry in the interior of Pascal’s triangle is the sum of the two entries above it. Thus, the expansions of (a + b)ⁿ are 1, for n = 0; a + b, for n = 1; a² + 2ab + b², for n = 2; a³ + 3a²b + 3ab² + b³, for n = 3; a⁴ + 4a³b + 6a²b² + 4ab³ + b⁴, for n = 4; and so on. The theorem is useful in algebra as well as for determining permutations, combinations, and probabilities. For positive integer exponents n, the theorem was known to Islamic and Chinese mathematicians of the late medieval period. Al-Karajī calculated Pascal’s triangle about 1000 CE, and Jia Xian in the mid-11th century calculated Pascal’s triangle up to n = 6. Isaac Newton discovered the general form of the theorem (for any real number n) about 1665 and stated it, without proof, in 1676; a proof by John Colson was published in 1736. The theorem can be generalized to include complex exponents for n, and this was first proved by Niels Henrik Abel in the early 19th century.
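The rule that each interior entry of Pascal’s triangle is the sum of the two entries above it translates directly into a short computation. In the sketch below the function name pascal_rows is illustrative, not something from the module; the printed rows reproduce the coefficients listed above for n = 0 through n = 4:

```python
def pascal_rows(n_max):
    """Yield rows 0..n_max of Pascal's triangle; row n holds the binomial coefficients C(n, r)."""
    row = [1]
    for _ in range(n_max + 1):
        yield row
        # Each interior entry is the sum of the two entries above it;
        # padding with zeros supplies the 1s at both ends of the next row.
        row = [left + right for left, right in zip([0] + row, row + [0])]

for n, row in enumerate(pascal_rows(4)):
    print(n, row)
# 0 [1]
# 1 [1, 1]
# 2 [1, 2, 1]
# 3 [1, 3, 3, 1]
# 4 [1, 4, 6, 4, 1]
```

The last row matches the coefficients 1, 4, 6, 4, 1 of a⁴ + 4a³b + 6a²b² + 4ab³ + b⁴ given above.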
Linear transformation, in mathematics, a rule for changing one geometric figure (or matrix or vector) into another, using a formula with a specified format. The format must be a linear combination, in which the original components (e.g., the x and y coordinates of each point of the original figure) are changed via the formula ax + by to produce the coordinates of the transformed figure. Examples include flipping the figure over the x or y axis, stretching or compressing it, and rotating it. Some such transformations have an inverse, which undoes their effect.

Rational root theorem, in algebra, theorem that for a polynomial equation in one variable with integer coefficients to have a solution (root) that is a rational number, the leading coefficient (the coefficient of the highest power) must be divisible by the denominator of the fraction and the constant term (the one without a variable) must be divisible by the numerator. In algebraic notation the canonical form for a polynomial equation in one variable (x) is aₙxⁿ + aₙ₋₁xⁿ⁻¹ + … + a₁x + a₀ = 0, where a₀, a₁, …, aₙ are ordinary integers. Thus, for a polynomial equation to have a rational solution p/q (in lowest terms), q must divide aₙ and p must divide a₀. For example, consider 3x³ − 10x² + x + 6 = 0. The only divisors of 3 are 1 and 3, and the only divisors of 6 are 1, 2, 3, and 6. Thus, if any rational roots exist, they must have a denominator of 1 or 3 and a numerator of 1, 2, 3, or 6, which limits the choices to 1/3, 2/3, 1, 2, 3, and 6 and their corresponding negative values. Plugging the 12 candidates into the equation yields the solutions −2/3, 1, and 3. In the case of higher-order polynomials, each root can be used to factor the equation, thereby simplifying the problem of finding further rational roots. In this example, the equation can be rewritten as 3(x − 1)(x + 2/3)(x − 3) = 0. Before computers were available to use the methods of numerical analysis, such calculations formed an essential part of the solution of most applications of mathematics to physical problems. The methods are still used in elementary courses in analytic geometry, though the techniques are superseded once students master basic calculus. The 17th-century French philosopher and mathematician René Descartes is usually credited with devising the test, along with Descartes’s rule of signs for the number of real roots of a polynomial. The effort to find a general method of determining when an equation has a rational or real solution led to the development of group theory and modern algebra.

Multinomial theorem, in algebra, a generalization of the binomial theorem to more than two variables. In statistics, the corresponding multinomial series appears in the multinomial distribution, which is a generalization of the binomial distribution. The multinomial theorem provides a formula for expanding an expression such as (x₁ + x₂ + ⋯ + xₖ)ⁿ for integer values of n. In particular, the expansion is the sum, over all combinations of nonnegative integers n₁, n₂, …, nₖ with n₁ + n₂ + ⋯ + nₖ = n, of terms of the form [n!/(n₁! n₂! ⋯ nₖ!)] x₁^n₁ x₂^n₂ ⋯ xₖ^nₖ, where n! is the factorial notation for 1 × 2 × 3 × ⋯ × n. For example, the expansion of (x₁ + x₂ + x₃)³ is x₁³ + 3x₁²x₂ + 3x₁²x₃ + 3x₁x₂² + 3x₁x₃² + 6x₁x₂x₃ + x₂³ + 3x₂²x₃ + 3x₂x₃² + x₃³.

Fundamental theorem of algebra, theorem of equations proved by Carl Friedrich Gauss in 1799. It states that every polynomial equation of degree n with complex number coefficients has n roots, or solutions, in the complex numbers, provided each root is counted according to its multiplicity. For example, x² − 2x + 1 = 0 can be expressed as (x − 1)(x − 1) = 0; that is, the root x = 1 occurs with a multiplicity of 2.
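Both of the worked examples above can be double-checked numerically. Assuming NumPy is available, its numpy.roots routine returns the roots of a polynomial from its coefficients listed from the highest power down; the double root of x² − 2x + 1 and the three rational roots of 3x³ − 10x² + x + 6 both appear (up to rounding, and in no particular order):

```python
import numpy as np

# x^2 - 2x + 1 = 0: the root 1 appears twice, matching its multiplicity of 2.
print(np.roots([1, -2, 1]))       # [1. 1.] (up to rounding)

# 3x^3 - 10x^2 + x + 6 = 0: the roots 3, 1, and -2/3 found by the rational root theorem.
print(np.roots([3, -10, 1, 6]))   # approximately [3. 1. -0.6667]
```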
The theorem can also be stated as follows: every polynomial equation of degree n, where n ≥ 1, with complex number coefficients has at least one root.

Descartes’s rule of signs, in algebra, rule for determining the maximum number of positive real number solutions (roots) of a polynomial equation in one variable based on the number of times that the signs of its real number coefficients change when the terms are arranged in the canonical order (from highest power to lowest power). For example, the polynomial x⁵ + x⁴ − 2x³ + x² − 1 = 0 changes sign three times, so it has at most three positive real solutions. Substituting −x for x gives the maximum number of negative solutions (two). The rule of signs was given, without proof, by the French philosopher and mathematician René Descartes in La Géométrie (1637). The English physicist and mathematician Sir Isaac Newton restated the formula in 1707, though no proof by him has been discovered; some mathematicians speculate that he considered its proof too trivial to bother recording. The earliest known proof was by the French mathematician Jean-Paul de Gua de Malves in 1740. The German mathematician Carl Friedrich Gauss made the first real advance in 1828, when he showed that, in cases where there are fewer than the maximum number of positive roots, the deficit is always an even number. Thus, in the example given above, the polynomial could have three positive roots or one positive root, but it could not have two positive roots.

Reference:
Ronan, M. A. (2004, March 12). Modern algebra | Algebraic structures, rings & group theory. Encyclopedia Britannica. https://www.britannica.com/science/modern-algebra/Rings
