Questions and Answers
Which of the following is an example of an oscillator?
- A sprinkler (correct)
- A static bookshelf
- A fixed light bulb
- A stationary table
All periodic motion is harmonic.
False
What is the term for one complete back and forth motion?
cycle
The maximum amount a pendulum swings away from its resting position is called the __________.
amplitude
What force is responsible for returning a system to rest in harmonic motion?
Match each term with its correct definition:
What unit is frequency measured in?
hertz (Hz)
Applying force decreases amplitude of a wave.
What term describes the phenomenon where sound waves overlap and their amplitudes add to each other?
In a longitudinal wave, the motion of the medium is __________ to the direction of the wave.
parallel
Flashcards
Oscillator
A device or object that has motion that repeats.
Harmonic motion
Motion that repeats AND is caused by a restoring force (like gravity).
Cycle
One complete back and forth motion.
Period
Frequency
Amplitude
Wave
Medium
Longitudinal Wave
Study Notes
Modular Arithmetic
- Modular arithmetic focuses on remainders after division
- Key concepts include congruence, modular inverses, and their applications in cryptography
Congruence
- $a$ is congruent to $b$ modulo $m$ if $m$ divides $a-b$
- This is written as $a \equiv b \mod m$
- $23 \equiv 3 \mod 5$ because $23-3 = 20$, which is divisible by $5$
Properties of Congruence
- If $a \equiv b \mod m$ and $c \equiv d \mod m$, then $a+c \equiv b+d \mod m$ and $ac \equiv bd \mod m$
- If $a \equiv b \mod m$, then $a^k \equiv b^k \mod m$ for any positive integer $k$
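A quick numerical check of these properties (a minimal Python sketch with illustrative values, not from the original notes): a congruence $a \equiv b \mod m$ holds exactly when `(a - b) % m == 0`.

```python
# Verify the congruence properties with concrete numbers (illustrative values).
def congruent(a, b, m):
    """True if a is congruent to b modulo m, i.e. m divides a - b."""
    return (a - b) % m == 0

a, b, c, d, m = 23, 3, 17, 2, 5   # 23 = 3 (mod 5) and 17 = 2 (mod 5)
assert congruent(a, b, m) and congruent(c, d, m)
assert congruent(a + c, b + d, m)      # sums stay congruent
assert congruent(a * c, b * d, m)      # products stay congruent
assert congruent(a ** 4, b ** 4, m)    # powers stay congruent
```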
Modular Inverses
- The inverse of $a$ modulo $m$ is an integer $x$ such that $ax \equiv 1 \mod m$
- The inverse of $3$ modulo $7$ is $5$ because $3 \cdot 5 = 15 \equiv 1 \mod 7$
Finding Modular Inverses
- The Extended Euclidean Algorithm is used to find modular inverses
- To find the inverse of $5$ modulo $12$:
- Apply the Euclidean Algorithm to find $\gcd(5, 12)$:
- $12 = 2 \cdot 5 + 2$
- $5 = 2 \cdot 2 + 1$
- $2 = 2 \cdot 1 + 0$
- $\gcd(5, 12) = 1$
- Apply the Extended Euclidean Algorithm to express $1$ as a linear combination of $5$ and $12$:
- $1 = 5 - 2 \cdot 2$
- $1 = 5 - 2 \cdot (12 - 2 \cdot 5)$
- $1 = 5 - 2 \cdot 12 + 4 \cdot 5$
- $1 = 5 \cdot 5 - 2 \cdot 12$
- From the equation $1 = 5 \cdot 5 - 2 \cdot 12$, we have $5 \cdot 5 \equiv 1 \mod 12$, so the inverse of $5$ modulo $12$ is $5$
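The procedure above translates into a short Python sketch (an illustrative iterative implementation of the extended Euclidean algorithm; the helper names are not from the notes):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

def mod_inverse(a, m):
    """Inverse of a modulo m, or None if gcd(a, m) != 1."""
    g, x, _ = extended_gcd(a, m)
    return x % m if g == 1 else None

print(mod_inverse(5, 12))   # 5, since 5 * 5 = 25 = 1 (mod 12)
```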
Example 1
- What is $5^{100} \mod 7$?
- $5^1 \equiv 5 \mod 7$
- $5^2 \equiv 25 \equiv 4 \mod 7$
- $5^3 \equiv 5 \cdot 4 \equiv 20 \equiv 6 \mod 7$
- $5^4 \equiv 5 \cdot 6 \equiv 30 \equiv 2 \mod 7$
- $5^5 \equiv 5 \cdot 2 \equiv 10 \equiv 3 \mod 7$
- $5^6 \equiv 5 \cdot 3 \equiv 15 \equiv 1 \mod 7$
- Therefore $5^{100} \equiv 5^{6 \cdot 16 + 4} \equiv (5^6)^{16} \cdot 5^4 \equiv 1^{16} \cdot 2 \equiv 2 \mod 7$
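This can be checked directly with Python's built-in modular exponentiation (a quick verification, not part of the original derivation):

```python
# pow(base, exp, mod) performs fast modular exponentiation.
print(pow(5, 100, 7))        # 2, matching the hand computation

# The powers of 5 modulo 7 repeat with period 6, as listed above.
print([pow(5, k, 7) for k in range(1, 7)])   # [5, 4, 6, 2, 3, 1]
```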
Example 2
- What is the inverse of $11 \mod 26$?
- Using the Extended Euclidean Algorithm:
- $26 = 2 \cdot 11 + 4$
- $11 = 2 \cdot 4 + 3$
- $4 = 1 \cdot 3 + 1$
- $1 = 4 - 1 \cdot 3$
- $1 = 4 - 1 \cdot (11 - 2 \cdot 4)$
- $1 = 4 - 1 \cdot 11 + 2 \cdot 4$
- $1 = 3 \cdot 4 - 1 \cdot 11$
- $1 = 3 \cdot (26 - 2 \cdot 11) - 1 \cdot 11$
- $1 = 3 \cdot 26 - 6 \cdot 11 - 1 \cdot 11$
- $1 = 3 \cdot 26 - 7 \cdot 11$
- Therefore $-7 \cdot 11 \equiv 1 \mod 26$
- $-7 \equiv 19 \mod 26$
- $19 \cdot 11 \equiv 1 \mod 26$, the inverse of $11 \mod 26$ is $19$
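The `mod_inverse` sketch above, or Python 3.8+'s `pow(a, -1, m)`, confirms the result:

```python
print(pow(11, -1, 26))    # 19 (modular inverse via pow, Python 3.8+)
print((19 * 11) % 26)     # 1, so 19 is indeed the inverse of 11 mod 26
```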
Bernoulli's Principle
- Daniel Bernoulli published the principle in the 18th century
- States that, for an inviscid flow, an increase in the speed of the fluid occurs simultaneously with a decrease in pressure or a decrease in the fluid's potential energy
- Bernoulli's principle can be applied to various types of fluid flow, resulting in what is loosely denoted as Bernoulli's equation
Assumptions for Application
- Steady flow means the fluid's velocity at a point remains constant over time
- Incompressible flow means the fluid's density is constant along the streamline
- Inviscid flow means friction due to viscosity is negligible
Bernoulli's Equation
- $P + \frac{1}{2} \rho v^2 + \rho g h = \text{constant}$
- $P$ is the static pressure of the fluid
- $\rho$ is the density of the fluid
- $v$ is the velocity of the fluid
- $g$ is the acceleration due to gravity
- $h$ is the height of the point above a reference plane
- This equation implies that if the speed of a fluid increases, the pressure decreases, and vice versa
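As an illustration of applying the equation between two points on a streamline (a minimal sketch; the density, speeds, heights, and pressure are assumed example values, not from the notes):

```python
RHO = 1000.0   # density of water in kg/m^3 (assumed working fluid)
G = 9.81       # gravitational acceleration in m/s^2

def pressure_at_point2(p1, v1, h1, v2, h2, rho=RHO, g=G):
    """Solve P1 + 0.5*rho*v1^2 + rho*g*h1 = P2 + 0.5*rho*v2^2 + rho*g*h2 for P2."""
    return p1 + 0.5 * rho * (v1**2 - v2**2) + rho * g * (h1 - h2)

# Example: water speeds up from 2 m/s to 6 m/s through a level constriction.
p2 = pressure_at_point2(p1=200_000.0, v1=2.0, h1=0.0, v2=6.0, h2=0.0)
print(p2)   # 184000.0 Pa -- faster flow, lower pressure
```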
Applications of Bernoulli's Principle
- Airplane Wings: Designed so air flows faster over the top, creating lower pressure, which results in lift
- Venturi Tubes: Constricted sections increase flow speed and decrease pressure, an effect used in carburetors
- Atomizers and Sprayers: Fast-moving air creates low pressure, drawing liquid up a tube for a fine spray
- Chimneys: Tall design utilizes wind to reduce pressure, drawing smoke out
Dijkstra's Algorithm
Motivation
- Given a graph $G = (V,E)$ and a source node $s \in V$, find the shortest path from $s$ to every other vertex $v \in V$
- BFS finds shortest paths in unweighted graphs, but it does not handle weighted edges
- Solution: Explore vertices in order of their distances from s, while maintaining $dist[v]$ for every vertex
- $dist[v]$ = length of the shortest path seen so far from $s$ to $v$
Algorithm Steps
- Initialize $dist[s] = 0$ and $dist[v] = \infty$ for all other $v \neq s$
- Let $S = \emptyset$
- While $S \neq V$:
- Find the vertex $v \notin S$ with the smallest value of $dist[v]$
- Add $v$ to $S$
- For each neighbor $u$ of $v$:
- $dist[u] = \min(dist[u], dist[v] + w(v,u))$
Example Application
- Running Dijkstra's algorithm on an example graph (A to E) visually outlines the process of iteratively updating distances
- Step-by-step examples trace the changes in the $dist$ array and the set $S$ as the algorithm progresses
Correctness Proof
- A base case, inductive hypothesis, and inductive step formally prove that when a vertex $v$ is added to $S$, $dist[v]$ equals the length of the shortest path from $s$ to $v$
Implementation Considerations
- Two main approaches to implementing the line "Find the vertex $v \notin S$ with the smallest value of $dist[v]$" are:
- Iterate through all vertices not in $S$:
- Each extraction takes $O(n)$ time, leading to a total time complexity of $O(n^2)$
- Use a priority queue (min-heap):
- Each heap operation takes $O(\log n)$ time, leading to a total time complexity of $O(m \log n)$
Code Snippets
- Presented are Python code implementations using both an iterative search and a priority queue, each with its corresponding time complexity analysis
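Those snippets are not reproduced in these notes; the following is a minimal sketch of the priority-queue version (not the original code; it assumes an adjacency-list graph of the form `{node: [(neighbor, weight), ...]}` with illustrative edge weights rather than the graph from the worked example):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a graph given as {node: [(neighbor, weight), ...]}."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    heap = [(0, source)]                 # (distance, vertex) pairs
    visited = set()                      # the set S of finalized vertices
    while heap:
        d, v = heapq.heappop(heap)
        if v in visited:
            continue                     # stale heap entry; v was already finalized
        visited.add(v)
        for u, w in graph[v]:
            if d + w < dist[u]:
                dist[u] = d + w          # relax the edge (v, u)
                heapq.heappush(heap, (dist[u], u))
    return dist

# Small example graph from A to E (illustrative weights).
graph = {
    'A': [('B', 4), ('C', 1)],
    'B': [('E', 4)],
    'C': [('B', 2), ('D', 5)],
    'D': [('E', 1)],
    'E': [],
}
print(dijkstra(graph, 'A'))   # {'A': 0, 'B': 3, 'C': 1, 'D': 6, 'E': 7}
```

Each heap operation costs $O(\log n)$ and each edge is relaxed at most once, which is where the $O(m \log n)$ bound above comes from.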
Negative Edge Weights
- Dijkstra's algorithm does not work correctly when graphs have negative edge weights
- The Bellman-Ford algorithm can be used in these cases
Statistiques descriptives univariées (Univariate Descriptive Statistics)
Définitions (Definitions)
- Population: All individuals or objects of interest
- Échantillon (Sample): A subset of the population
- Variable: A measured or observed characteristic for each individual or object in the population or sample
- Données (Data): The set of observed values for one or more variables
Types de variables (Types of Variables)
- Variables qualitatives (catégorielles)/Categorical Variables
- Nominales (Nominal): Categories cannot be ordered (e.g., eye color, car brand)
- Ordinales (Ordinal): Categories can be ordered (e.g., satisfaction level, social class)
- Variables quantitatives (numériques)/Quantitative Variables
- Discrètes (Discrete): Values are integers (e.g., number of children, number of cars)
- Continues (Continuous): Values can be any number within an interval (e.g., height, weight, temperature)
Mesures de tendance centrale (Measures of Central Tendency)
- Moyenne (Mean): Sum of all values divided by the number of values
- Notation: $\qquad \bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}$
- Médiane (Median): The value that separates the data into two equal parts
- Mode: The value that appears most often in the data
Mesures de dispersion (Measures of Dispersion)
- Étendue (Range): Difference between the maximum and minimum values
- Variance: Average of the squares of the deviations from the mean
- Notation: $\qquad s^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1}$
- Écart type (Standard Deviation): Square root of the variance
- Notation: $\qquad s = \sqrt{s^2}$
- Coefficient de variation (CV)/Coefficient of Variation: Ratio of the standard deviation to the mean (expressed as a percentage)
- Notation: $\qquad CV = \frac{s}{\bar{x}} \cdot 100$
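These measures can be computed with Python's standard library (a small sketch using assumed sample values, not data from the notes):

```python
import statistics

data = [12, 15, 15, 18, 20, 22, 25]     # illustrative sample values

mean = statistics.mean(data)             # moyenne
median = statistics.median(data)         # médiane
mode = statistics.mode(data)             # mode
rng = max(data) - min(data)              # étendue
variance = statistics.variance(data)     # variance d'échantillon (divides by n-1)
stdev = statistics.stdev(data)           # écart type
cv = stdev / mean * 100                  # coefficient de variation (%)

print(mean, median, mode, rng, round(variance, 2), round(stdev, 2), round(cv, 1))
```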
Mesures de position (Measures of Position)
- Quartiles: Divide the data into four equal parts
- $Q_1$: First quartile (25% of data are less than or equal to this value)
- $Q_2$: Second quartile (median)
- $Q_3$: Third quartile (75% of data are less than or equal to this value)
- Déciles (Deciles): Divide the data into ten equal parts
- Percentiles: Divide the data into one hundred equal parts
Représentations graphiques (Graphical Representations)
- Variables qualitatives (Qualitative Variables)
- Diagramme à barres (Bar chart)
- Diagramme circulaire/camembert (Pie chart)
- Variables quantitatives (Quantitative Variables)
- Histogramme (Histogram)
- Boîte à moustaches (Box plot)
- Nuage de points (Scatter plot)
Tableaux de fréquences (Frequency Tables)
- Fréquence absolue (Absolute Frequency): Number of occurrences of a value
- Fréquence relative (Relative Frequency): Proportion of occurrences of a value relative to the total number of values
- Fréquence cumulée (Cumulative Frequency): Sum of the absolute or relative frequencies up to a given value
Exemple de tableau de fréquences (Example Frequency Table)
| Valeur (Value) | Fréquence absolue (Absolute Frequency) | Fréquence relative (Relative Frequency) | Fréquence cumulée (Cumulative Frequency) |
|---|---|---|---|
| 1 | 5 | 0.25 | 5 |
| 2 | 8 | 0.40 | 13 |
| 3 | 4 | 0.20 | 17 |
| 4 | 3 | 0.15 | 20 |
| Total | 20 | 1.00 | |
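A table like this can be built programmatically (a small sketch assuming the raw observations are available as a list; the counts match the table above):

```python
from collections import Counter
from itertools import accumulate

# Raw observations matching the table above: 5 ones, 8 twos, 4 threes, 3 fours.
values = [1]*5 + [2]*8 + [3]*4 + [4]*3

counts = Counter(values)                       # fréquences absolues
total = len(values)
absolute = [counts[v] for v in sorted(counts)]
relative = [n / total for n in absolute]       # fréquences relatives
cumulative = list(accumulate(absolute))        # fréquences cumulées

for v, n, f, c in zip(sorted(counts), absolute, relative, cumulative):
    print(v, n, f, c)    # 1 5 0.25 5 / 2 8 0.4 13 / 3 4 0.2 17 / 4 3 0.15 20
```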
Formules utiles (Useful Formulas)
- Moyenne pondérée (Weighted Average)
- Notation: $\qquad \bar{x}_w = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}$
- $w_i$ are the weights associated with the values $x_i$
- Covariance (échantillon)/Sample Covariance
- Notation: $\qquad cov(x, y) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n-1}$
- Coefficient de corrélation de Pearson/Pearson Correlation Coefficient
- Notation: $\qquad r = \frac{cov(x, y)}{s_x s_y}$
- $s_x$ and $s_y$ are the standard deviations of $x$ and $y$, respectively
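These formulas translate directly into Python (a minimal sketch with illustrative data; the helper names are not from the notes):

```python
import math

def weighted_mean(x, w):
    """Weighted average: sum(w_i * x_i) / sum(w_i)."""
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

def sample_covariance(x, y):
    """Sample covariance with an n-1 denominator."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)

def pearson_r(x, y):
    """Pearson correlation: cov(x, y) / (s_x * s_y)."""
    sx = math.sqrt(sample_covariance(x, x))
    sy = math.sqrt(sample_covariance(y, y))
    return sample_covariance(x, y) / (sx * sy)

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
print(weighted_mean(x, w=[1, 1, 2, 2, 4]))   # 3.7 (weights chosen for illustration)
print(round(pearson_r(x, y), 3))             # 0.775
```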
Atomic Habits by James Clear
Introduction
- Presents an actionable, practical guide to improvement rather than a theoretical one
My Story
- Detailed account of events in James Clear's life that led to the development of good habits
- This includes his baseball career, injury, recovery, and the academic and athletic accomplishments he attributes to small habits
The Surprising Power of Atomic Habits
Why Small Improvements Matter
- 1% better every day counts in the long run
- Habits are the compound interest of self-improvement
- Success stems from daily habits, not single transformations
Forget About Goals, Focus on Systems Instead
- Defining goals versus systems
- Goals = desired outcome
- System = processes leading to the goal
- Discusses the problems that arise when focusing on goals rather than systems, and argues that the focus should be on the system
How Your Habits Shape Your Identity (and Vice Versa)
- The three layers of behavior change: outcomes, processes, and identity
- Focus on who to become, not only what to achieve
- Every action is a vote for the type of person you wish to become
- Two-step process: decide the type of person you aspire to be, prove it with small wins
The Four Laws of Behavior Change
- A loop is composed of Cue, Craving, Response and Reward
The 1st Law (Cue): Make it Obvious
- Implementation intention: Specify WHEN and WHERE the habit will occur
- "I will [BEHAVIOR] at [TIME] in [LOCATION]."
- Habit stacking: Pair a new habit with a current habit
- After [CURRENT HABIT], I will [NEW HABIT].
The 2nd Law (Craving): Make It Attractive
- Temptation bundling: Pair a habit you need to do with a habit you want to do.
- After [HABIT I NEED TO DO], I will [HABIT I WANT TO DO].
- Join a culture where your desired behavior is the normal behavior
The 3rd Law (Response): Make It Easy
- Reduce friction: lower the barrier to entry for good habits.
- Master the Two-Minute Rule: Start new habits in under two minutes.
The 4th Law (Reward): Make It Satisfying
- Use a Habit Tracker: A simple way to measure whether you did a habit.
- Never Miss Twice: Getting back on track as soon as possible.
- Missing one measurement is an accident. Missing two is the start of a new habit.
Linear Algebra and Matrix Analysis
Chapter 1: Introduction
- Vecteurs (Vectors)
- A one-dimensional array of numbers
- Column Vector example: $\mathbf{x} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$
- Row Vector example: $\mathbf{x} = \begin{bmatrix} 1 & 2 & 3 \end{bmatrix}$
- Matrices (Matrices)
- A two-dimensional array of numbers
- Example: $\mathbf{A} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$
- Opérations de base (Basic Operations)
- Addition de vecteurs (Vector Addition)
- If $\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$ and $\mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}$, then $\mathbf{x} + \mathbf{y} = \begin{bmatrix} x_1 + y_1 \\ x_2 + y_2 \\ \vdots \\ x_n + y_n \end{bmatrix}$
- Multiplication scalaire (Scalar Multiplication)
- If $\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$ and $c$ is a scalar, then $c\mathbf{x} = \begin{bmatrix} cx_1 \\ cx_2 \\ \vdots \\ cx_n \end{bmatrix}$
- Multiplication de matrices (Matrix Multiplication)
- If $\mathbf{A}$ is an $m \times n$ matrix and $\mathbf{B}$ is an $n \times p$ matrix, then $\mathbf{C} = \mathbf{AB}$ is an $m \times p$ matrix, where $c_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj}$
- Transposition (Transposition)
- The transpose of a matrix $\mathbf{A}$, denoted $\mathbf{A}^T$, is obtained by swapping the rows and columns of $\mathbf{A}$
- If $\mathbf{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$, then $\mathbf{A}^T = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}$
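These basic operations map directly onto NumPy (a short illustrative sketch, assuming NumPy is installed):

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(x + y)        # vector addition        -> [5 7 9]
print(3 * x)        # scalar multiplication  -> [3 6 9]
print(A @ B)        # matrix multiplication  -> [[19 22] [43 50]]
print(A.T)          # transpose              -> [[1 3] [2 4]]
```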
Chapter 2: Systèmes d'Équations Linéaires (Systems of Linear Equations)
- Représentation matricielle (Matrix Representation)
- A system of linear equations can be represented in matrix form as $\mathbf{Ax} = \mathbf{b}$, where $\mathbf{A}$ is the matrix of coefficients, $\mathbf{x}$ is the vector of variables, and $\mathbf{b}$ is the vector of constants
- Méthodes de résolution (Solution Methods)
- Élimination de Gauss (Gaussian Elimination)
- A method to solve systems of linear equations by transforming the augmented matrix $[\mathbf{A} | \mathbf{b}]$ into row echelon form (or reduced row echelon form with Gauss–Jordan elimination)
- Règle de Cramer (Cramer's Rule)
- Another method to solve systems of linear equations using determinants
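A small numerical sketch of both approaches for a 2×2 system (illustrative coefficients; `np.linalg.solve` stands in for hand-worked Gaussian elimination):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# Direct numerical solution (LAPACK-based elimination).
print(np.linalg.solve(A, b))            # [1. 3.]

# Cramer's rule for the same 2x2 system: replace one column of A with b.
det_A = np.linalg.det(A)
x1 = np.linalg.det(np.column_stack((b, A[:, 1]))) / det_A
x2 = np.linalg.det(np.column_stack((A[:, 0], b))) / det_A
print(x1, x2)                            # 1.0 3.0 (up to floating-point error)
```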
Chapter 3: Espaces Vectoriels (Vector Spaces)
- Définition (Definition)
- A non-empty set $V$ of objects, called vectors, on which two operations are defined: addition and scalar multiplication
- Sous-espaces vectoriels (Vector Subspaces)
- A subset of a vector space that is itself a vector space
- Base et dimension (Basis and Dimension)
- A basis of a vector space is a set of linearly independent vectors that span the vector space
- The dimension of a vector space is the number of vectors in a basis
Chapter 4: Valeurs Propres et Vecteurs Propres (Eigenvalues and Eigenvectors)
- Définition (Definition)
- An eigenvector of a matrix $\mathbf{A}$ is a non-zero vector $\mathbf{v}$ such that $\mathbf{Av} = \lambda\mathbf{v}$, where the scalar $\lambda$ is the corresponding eigenvalue of $\mathbf{A}$
- Calcul des valeurs propres (Calculating Eigenvalues)
- The eigenvalues of a matrix $\mathbf{A}$ are the solutions of the characteristic equation $\det(\mathbf{A} - \lambda\mathbf{I}) = 0$, where $\mathbf{I}$ is the identity matrix
- Diagonalisation (Diagonalization)
- A matrix $\mathbf{A}$ is diagonalizable if there exists an invertible matrix $\mathbf{P}$ such that $\mathbf{P}^{-1}\mathbf{AP}$ is a diagonal matrix
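A brief NumPy check of the definition and of diagonalization (illustrative matrix; eigenvalue ordering may vary):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)        # columns of P are eigenvectors
print(eigenvalues)                        # e.g. [5. 2.] (order may vary)

# Verify A v = lambda v for the first eigenpair.
v, lam = P[:, 0], eigenvalues[0]
print(np.allclose(A @ v, lam * v))        # True

# Diagonalization: P^{-1} A P is diagonal with the eigenvalues on its diagonal.
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))                    # diagonal matrix of eigenvalues
```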
Formules Importantes (Important Formulas)
- Produit scalaire (Dot Product): $\mathbf{x} \cdot \mathbf{y} = \sum_{i=1}^{n} x_i y_i$
- Norme d'un vecteur (Vector Norm): $||\mathbf{x}|| = \sqrt{\sum_{i=1}^{n} x_i^2}$
- Déterminant d'une matrice 2x2 (Determinant of a 2x2 Matrix): $\det\begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc$
Regulation of Gene Expression
Introduction
- Gene expression converts a gene's information into a functional gene product, like proteins or functional RNA
- Expression is precisely controlled to adjust to changing conditions while saving energy and resources
- Precise levels of gene products are produced in a spatial-temporal context
- Regulation can occur at transcription, RNA processing, RNA transport, translation, and protein modification
- Transcription regulation, deciding if a gene is transcribed, is a common control mechanism
Regulation of Transcription
Prokaryotes
- Prokaryotes regulate transcription to respond to environmental changes
- Operons are a key mechanism, consisting of:
- Promoter: Where RNA polymerase binds
- Operator: Controls RNA polymerase access
- Genes: DNA stretch transcribed into a single mRNA
- Activators switch on some operons
- Repressors switch off other operons, preventing transcription by:
- binding the operator
- blocking RNA polymerase
- The repressor's active form depends on whether a corepressor or an inducer is bound
Eukaryotes
- Eukaryotic expression is more complex
- Chromatin structure's role:
- DNA packaged with proteins into chromatin
- Heterochromatin is tightly condensed, not usually transcribed
- Euchromatin is loosely packed, able to be transcribed
- Histone acetylation and DNA methylation affect chromatin
- Transcription factors are essential for regulation:
- General factors are needed for transcription of all protein-coding genes
- Specific factors bind control elements and influence transcription rates
- Enhancers are distant DNA sequences
- Activators bind enhancers and interact with other proteins to initiate transcription
- Combinatorial control: activation requires a particular combination of transcription factors
RNA Processing
- RNA processing includes:
- RNA splicing: Intron removal, exon joining
- Alternative splicing: Different mRNAs produced from one pre-mRNA by treating segments (exons) differently
Translation and Protein Processing
- Translation can be regulated by:
- Initiation factors: Required to bind mRNA to ribosomes
- Regulatory proteins: Bind mRNA, prevent translation
- Protein processing:
- Cleavage: Cutting the polypeptide chain
- Chemical modifications: Adding phosphates/sugars
- Protein degradation: Proteins are marked by ubiquitin and degraded by proteasomes
Non-coding RNAs
- Non-coding RNAs (ncRNAs) regulate expression
- MicroRNAs (miRNAs) bind mRNA and block its translation or trigger its degradation
- Small interfering RNAs (siRNAs) are similar to miRNAs
- Piwi-associated RNAs (piRNAs) play a role in heterochromatin formation and block transposons
Cancer
- Cancer may result from mutated genes that regulate growth
- Oncogenes promote growth
- Tumor suppressor genes inhibit growth
- Mutated genes can lead to uncontrolled growth and cancer
Chemistry Thermodynamics
Spontaneous Processes
- Spontaneous processes occur without external intervention, like a ball rolling downhill or rust forming on iron
Entropy (S)
- Entropy measures the dispersal of energy in a system
- Measured in J/K or J/(mol·K)
- It is a state function: $\Delta S = S_{final} - S_{initial}$
Changes That Increase Entropy ($\Delta S > 0$)
- Phase Changes: Entropy increases in the order $S_{solid} < S_{liquid} < S_{gas}$, so melting, vaporization, and sublimation all increase entropy
- Increase in Temperature: More kinetic energy exists at higher temperatures
- Increase in Volume: More space allows more dispersal of energy
- Increase in the Number of Gas Molecules: More molecules mean more ways to distribute energy
- For example, $2H_2O_2(g) \rightarrow 2H_2O(g) + O_2(g)$ increases entropy (2 mol of gas become 3 mol)
Entropy Changes in Chemical Reactions
- For the reaction $aA + bB \rightarrow cC + dD$:
- $\Delta S_{rxn}^\circ = \sum n S^\circ(\text{products}) - \sum n S^\circ(\text{reactants})$
- Here, $n$ is the stoichiometric coefficient of each species
Example
- Calculating the change in entropy for the reaction $N_2(g) + 3H_2(g) \rightarrow 2NH_3(g)$ gives:
- $\Delta S_{rxn}^\circ = [2 \cdot S^\circ (NH_3(g))] - [S^\circ (N_2(g)) + 3 \cdot S^\circ (H_2(g))]$
- Using values of:
- $S^\circ (N_2(g)) = 191.5 J/(mol \cdot K)$
- $S^\circ (H_2(g)) = 130.6 J/(mol \cdot K)$
- $S^\circ (NH_3(g)) = 192.3 J/(mol \cdot K)$
- The result is:
- $\Delta S_{rxn}^\circ = -198.7 J/K$
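The arithmetic can be verified in a few lines of Python (values taken from the list above):

```python
S_N2, S_H2, S_NH3 = 191.5, 130.6, 192.3   # standard molar entropies in J/(mol*K)

delta_S = 2 * S_NH3 - (S_N2 + 3 * S_H2)
print(round(delta_S, 1))                   # -198.7 J/K: fewer gas molecules, lower entropy
```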
Entropy and the Second Law of Thermodynamics
- Second Law: the entropy of the universe increases during any spontaneous process.
- $\Delta S_{universe} = \Delta S_{system} + \Delta S_{surroundings} > 0$
- For a reversible process, $\Delta S_{universe} = 0$
Entropy of Surroundings
- $\Delta S_{surroundings} = \frac{- \Delta H_{system}}{T}$ (at constant temperature and pressure)
- Exothermic process: $\Delta H_{system} < 0$, so $\Delta S_{surroundings} > 0$
- Endothermic process: $\Delta H_{system} > 0$, so $\Delta S_{surroundings} < 0$
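As a small worked illustration (assuming, since the notes give no enthalpy value, a typical textbook $\Delta H_{rxn}^\circ \approx -92.2\ \text{kJ}$ for the ammonia synthesis above, at $T = 298.15\ \text{K}$):

```python
delta_H_system = -92.2e3     # J; assumed exothermic enthalpy change (not from the notes)
T = 298.15                   # K

delta_S_surroundings = -delta_H_system / T
print(round(delta_S_surroundings, 1))   # ~309.2 J/K: an exothermic process raises the surroundings' entropy
```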