Questions and Answers
Which modal verb expresses the weakest possibility?
- Could
- Can
- Might (correct)
- May
Which modal verb is often used for making suggestions?
- May
- Must
- Can
- Should (correct)
Which word indicates obligation or necessity?
- Must (correct)
- May
- Should
- Could
Which modal verb expresses ability or possibility?
Which of the following expresses an action that is required?
Which example shows a suggestion using 'should'?
Which of the following sentences uses 'can' to express an ability?
Which of the following sentences uses past perfect?
Which sentence is an example of the use of future simple?
Which sentence uses the future continuous tense?
Which tense is used to describe actions that are habitual or repeated?
Which tense is used in the sentence: 'She is reading a book'?
What is the structure of the present continuous tense?
What is the function of present perfect tense?
Which option shows the correct structure for past simple?
Which tense would you use to describe an action in progress at a specific time in the past?
Which of these options is a correct past continuous sentence?
Which auxiliary verb is used to form the past continuous tense?
Which auxiliary verb is used to form the present perfect tense?
In the sentence, 'I eat breakfast every day.', which tense is used?
Flashcards
Present Simple
Actions that are habitual or repeated, general facts, universal truths or situations.
Present Continuous
Actions that are happening at the moment of speaking, temporary actions or future planned actions.
Present Perfect
Actions that started in the past and have an effect on the present, life experiences, or actions that occurred at an unspecified time before now.
Past Simple
Actions that were completed at a specific time in the past.
Past Continuous
Actions that were in progress at a specific moment in the past.
Past Perfect
Actions that were completed before another action or point in the past.
Future Simple
Actions, decisions or predictions about the future.
Future Continuous
Actions that will be in progress at a specific time in the future.
"Will"
Used to express future actions, predictions, promises and spontaneous decisions.
"Would"
Used for polite requests, hypothetical situations, and as the past form of "will".
"Ought to"
Expresses advice or moral obligation, similar to "should".
"Need"
Expresses necessity; "need not" expresses lack of obligation.
"Can"
Expresses ability or possibility; also used for informal permission.
"Could"
Expresses past ability, polite requests, or a possibility.
"May"
Expresses permission or possibility.
"Might"
Expresses a weak possibility.
"Must"
Expresses obligation or necessity, or a strong logical deduction.
"Should"
Used for advice and suggestions.
Study Notes
Quantum Mechanics
- Quantum mechanics studies matter and energy at the atomic and subatomic levels.
Key Concepts
- Quantization: Properties like energy and momentum come in discrete amounts.
- Wave-Particle Duality: Particles can act like waves, and waves can act like particles.
- Uncertainty Principle: Limits exist on how precisely position and momentum can be simultaneously known.
Mathematical Representation
Wave Function
- $\Psi(r, t)$ represents the state of a quantum system.
- $|\Psi(r, t)|^2$ indicates the probability density of finding a particle at position $r$ at time $t$.
Schrödinger Equation
- $i\hbar\frac{\partial}{\partial t}\Psi(r, t) = \hat{H}\Psi(r, t)$ describes quantum state change over time.
- $i$ is the imaginary unit.
- $\hbar$ is the reduced Planck constant.
- $\frac{\partial}{\partial t}$ is the partial derivative with respect to time.
- $\hat{H}$ is the Hamiltonian operator, representing the system's total energy.
Core Principles
Superposition
- $|\Psi\rangle = c_1|\Psi_1\rangle + c_2|\Psi_2\rangle$ explains that a quantum system can be in multiple states at once.
- $|\Psi\rangle$ is the state of the system.
- $|\Psi_1\rangle$ and $|\Psi_2\rangle$ are possible states.
- $c_1$ and $c_2$ are complex numbers indicating probability amplitudes for each state.
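A quick numerical illustration of the superposition rule (a minimal sketch, not part of the original notes; the amplitude values are arbitrary): normalize two amplitudes so the probabilities sum to 1, then read off the probability of each measurement outcome as $|c_i|^2$.

```python
import numpy as np

# Hypothetical probability amplitudes c1, c2 for the basis states |Psi_1>, |Psi_2>
c = np.array([1 + 1j, 2 - 1j], dtype=complex)

# Normalize so that |c1|^2 + |c2|^2 = 1 (total probability must be 1)
c = c / np.linalg.norm(c)

# Probability of a measurement collapsing the system into each basis state
probabilities = np.abs(c) ** 2
print(probabilities, probabilities.sum())  # approx [0.286 0.714] and 1.0
```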
Measurement
- Measuring a quantum system forces it into a single, definite state.
Entanglement
- Two or more particles become correlated so that the state of each cannot be described independently of the others, even when they are far apart.
Applications
- Quantum Computing: Uses quantum mechanics for computation.
- Quantum Cryptography: Secure communication using quantum principles.
- Materials Science: Designing and understanding new materials.
- Medical Imaging: Enhancement of imaging techniques.
Key Figures
- Max Planck: Introduced quantization.
- Albert Einstein: Explained the photoelectric effect.
- Niels Bohr: Developed the atomic model.
- Werner Heisenberg: Formulated the uncertainty principle.
- Erwin Schrödinger: Developed the Schrödinger equation.
- Paul Dirac: Integrated quantum mechanics with relativity.
Thermodynamics
Energy
- The capacity to do work.
- Exists in thermal, mechanical, kinetic, potential, electrical, magnetic, chemical, and nuclear forms.
- Total energy: $E$ is the sum of all energy forms.
- Total energy on a unit mass basis: $e = E/m$.
- Total energy is divided into macroscopic and microscopic forms.
Macroscopic Energy
- Macroscopic energy is energy possessed by a system as a whole relative to an external reference frame.
Kinetic Energy (KE)
- $KE = \frac{1}{2}mV^2$ is energy due to motion relative to a reference frame.
- Kinetic energy on a unit mass basis: $ke = \frac{1}{2}V^2$.
Potential Energy (PE)
- $PE = mgz$ is energy due to its elevation in a gravitational field.
- Potential energy on a unit mass basis: $pe = gz$.
Microscopic Energy
- Microscopic energy is related to molecular structure and activity, independent of external reference frames.
Internal Energy
- $U$ is the sum of all microscopic energy forms.
- Internal energy on a unit mass basis is denoted by $u$.
- Internal energy $U$ comprises sensible, latent, chemical, and nuclear energies.
- Sensible Energy: Portion associated with molecular kinetic energies.
- Latent Energy: Internal energy linked to the system's phase.
- Chemical Energy: Internal energy in atomic bonds within molecules.
- Nuclear Energy: Energy within the atom's nucleus.
- Thermodynamics focuses on changes in internal energy, $\Delta u$.
Forms of Energy
- Energy crosses boundaries of closed systems as heat and work.
- Heat $Q$ is energy transferred due to temperature differences.
- Work $W$ is energy transfer from a force acting through a distance.
- Heat and work are energy-transfer mechanisms.
The First Law of Thermodynamics
- Energy is neither created nor destroyed, only transformed.
- The change in a system's total energy equals the difference between energy entering and leaving: $E_{in} - E_{out} = \Delta E_{system}$.
- In rate form: $\dot{E}_{in} - \dot{E}_{out} = \frac{dE_{system}}{dt}$.
- Energy Balance: Net energy transfer equals the change in the system's energy content.
- For a closed system undergoing a cycle: $\Delta E_{system} = 0$.
Energy Transfer by Heat
- Heat is recognized only as it crosses boundaries.
- Adiabatic Process: No heat transfer occurs.
- Heat Transfer Mechanisms:
- Conduction: Energy transfer via particle interactions.
- Convection: Energy transfer between a surface and moving fluid.
- Radiation: Energy emitted as electromagnetic waves.
- Heat added to a system is positive; heat removed is negative.
Energy Transfer by Work
- Work is energy transfer via force acting over a distance.
- Power: Work done per unit time.
- Work done by the system is positive; work done on the system is negative.
Mechanical Forms of Work
Moving Boundary Work (Pdv Work)
- $W_b = \int_{1}^{2} P dV$
- Area under the process curve on a P-V diagram represents boundary work.
Shaft Work
- $W_{sh} = T\theta$
- Power transmitted: $\dot{W}_{sh} = T\omega$. $\omega$ is angular velocity.
Spring Work
- $W_{spring} = \frac{1}{2}k(x_2^2 - x_1^2)$
Electrical Work
- $W_e = VIt$
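The work formulas above are easy to evaluate numerically. The following sketch (illustrative only; the helper functions and sample inputs are made up for these notes) computes constant-pressure boundary work, shaft work, spring work, and electrical work in kJ:

```python
import math

def boundary_work_const_p(p_kpa, v1_m3, v2_m3):
    # W_b = integral of P dV; at constant pressure this is P * (V2 - V1), in kJ
    return p_kpa * (v2_m3 - v1_m3)

def shaft_work(torque_nm, revolutions):
    # W_sh = T * theta, with theta = 2*pi*(number of revolutions) radians, in kJ
    return torque_nm * 2 * math.pi * revolutions / 1000.0

def spring_work(k_kn_per_m, x1_m, x2_m):
    # W_spring = 0.5 * k * (x2^2 - x1^2), in kJ
    return 0.5 * k_kn_per_m * (x2_m**2 - x1_m**2)

def electrical_work(volts, amps, seconds):
    # W_e = V * I * t, in kJ
    return volts * amps * seconds / 1000.0

# Made-up sample values, purely to exercise the formulas:
print(boundary_work_const_p(200, 0.5, 1.0))    # 100.0 kJ
print(shaft_work(62.83, 100))                  # ~39.5 kJ
print(spring_work(150, 0.0, 0.2))              # 3.0 kJ
print(electrical_work(120, 5, 60))             # 36.0 kJ
```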
Example: Shaft Work
- A 15-hp electric motor running at 1700 rpm applies a torque of 62.83 N.m, found as follows:
$$\begin{aligned} \dot{W}_{sh} &= 15 \text{ hp} \left( \frac{0.7457 \text{ kW}}{1 \text{ hp}} \right) = 11.1855 \text{ kW} \\ \omega &= 1700 \frac{\text{rev}}{\text{min}} \left( \frac{2\pi \text{ rad}}{1 \text{ rev}} \right) \left( \frac{1 \text{ min}}{60 \text{ s}} \right) = 178.02 \text{ rad/s} \\ \dot{W}_{sh} &= T\omega \quad\Rightarrow\quad T = \frac{\dot{W}_{sh}}{\omega} = \frac{11.1855 \text{ kW}}{178.02 \text{ rad/s}} \left( \frac{1000 \text{ N.m/s}}{1 \text{ kW}} \right) = 62.83 \text{ N.m} \end{aligned}$$
The First Law of Thermodynamics
- For a closed system: $Q - W = \Delta E = \Delta U + \Delta KE + \Delta PE$
- $Q - W = m(u_2 - u_1) + \frac{1}{2}m(V_2^2 - V_1^2) + mg(z_2 - z_1)$
- For stationary closed systems: $Q - W = \Delta U = m(u_2 - u_1)$
Specific Heats
- The energy needed to raise the temperature of a unit mass of a substance by one degree.
- $c_v$: Specific heat at constant volume.
- $c_p$: Specific heat at constant pressure.
- Specific heat is a thermodynamic property; its value depends on the state of the substance.
- For ideal gases, $u$ depends only on $T$:
$$du = c_v(T)dT$$
$$\Delta u = u_2 - u_1 = \int_{1}^{2} c_v(T)dT$$
- For ideal gases, $h$ depends only on $T$:
$$dh = c_p(T)dT$$
$$\Delta h = h_2 - h_1 = \int_{1}^{2} c_p(T)dT$$
Constant Specific Heats
- Specific heats vary with temperature, but over small temperature intervals the variation is small and they can be treated as constant at average values.
$$\Delta u = u_2 - u_1 \cong c_{v, avg}(T_2 - T_1)$$
$$\Delta h = h_2 - h_1 \cong c_{p, avg}(T_2 - T_1)$$
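To see why the constant-specific-heat approximation works well over moderate temperature ranges, the sketch below (illustrative; the linear $c_v(T)$ model and its coefficients are assumed, not taken from these notes) compares the exact integral $\int_{1}^{2} c_v(T)\,dT$ with $c_{v, avg}(T_2 - T_1)$:

```python
def cv(T):
    # Assumed linear model for c_v(T) in kJ/(kg.K); coefficients are illustrative only
    return 0.70 + 2.0e-4 * T

T1, T2 = 300.0, 800.0          # K
N = 100_000
dT = (T2 - T1) / N

# "Exact" change in internal energy: integrate c_v(T) dT with the midpoint rule
du_exact = sum(cv(T1 + (i + 0.5) * dT) for i in range(N)) * dT

# Constant-specific-heat approximation, with c_v evaluated at the average temperature
du_approx = cv(0.5 * (T1 + T2)) * (T2 - T1)

print(du_exact, du_approx)     # ~405 kJ/kg each; for a linear c_v the two agree exactly
```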
Internal Energy, Enthalpy, and Specific Heats of Ideal Gases
- Ideal Gas Equation: $P = \rho RT$
- Enthalpy: $h = u + Pv = u + RT$.
- $c_p$ and $c_v$ are related: $c_p = c_v + R \quad (kJ/kg.K)$
- Specific heat ratio: $k = \frac{c_p}{c_v}$
The First Law for Ideal Gases
Constant Volume Process
- $Q - W = m c_v (T_2 - T_1)$
- $W = 0$
- $Q = m c_v (T_2 - T_1)$
Constant Pressure Process
- $Q - W = \Delta U = m c_v (T_2 - T_1)$
- $W = P(V_2 - V_1)$
- $Q = m c_p (T_2 - T_1)$
Isothermal Process
- $Q - W = m c_v (T_2 - T_1)$
- $T_1 = T_2$
- $Q = W$
- $W = P_1V_1 \ln{\frac{V_2}{V_1}} = mRT_1 \ln{\frac{V_2}{V_1}}$
Polytropic Process
- $P V^n = C$
- $W = \frac{P_2V_2 - P_1V_1}{1-n}$
- $W = \frac{mR(T_2 - T_1)}{1-n}$
Isentropic Process
- $P V^k = C$
- $W = \frac{P_2V_2 - P_1V_1}{1-k}$
- $W = \frac{mR(T_2 - T_1)}{1-k}$
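A short sketch (illustrative; the state values are invented) applying the isothermal and polytropic work expressions above to 1 kg of air:

```python
import math

R_AIR = 0.287  # kJ/(kg.K), gas constant for air

def isothermal_work(m_kg, T_K, v_ratio):
    # W = m R T ln(V2/V1) for an isothermal ideal-gas process, in kJ
    return m_kg * R_AIR * T_K * math.log(v_ratio)

def polytropic_work(m_kg, T1_K, T2_K, n):
    # W = m R (T2 - T1) / (1 - n) for a polytropic process P V^n = C, in kJ
    return m_kg * R_AIR * (T2_K - T1_K) / (1.0 - n)

# Invented example states:
print(isothermal_work(1.0, 300.0, 2.0))         # ~59.7 kJ: isothermal doubling of volume at 300 K
print(polytropic_work(1.0, 300.0, 360.0, 1.3))  # ~-57.4 kJ: negative, i.e. work done on the gas
```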
Example: Electric Heating of a Gas at Constant Pressure
- 3 kg of air at 20$^\circ$C and 200 kPa is heated until the volume doubles.
- The work done by the air is 252.15 kJ; the heat transfer is 883.3 kJ.
$$\begin{aligned} \frac{V_1}{T_1} &= \frac{V_2}{T_2} \quad\Rightarrow\quad T_2 = \frac{V_2}{V_1} T_1 = 2 T_1 = 2(293 \text{ K}) = 586 \text{ K} \\ W &= P(V_2 - V_1) = P(2V_1 - V_1) = PV_1 = m R T_1 = (3 \text{ kg}) \left(0.287 \frac{\text{kJ}}{\text{kg.K}}\right) (293 \text{ K}) = 252.15 \text{ kJ} \\ Q &= m c_p (T_2 - T_1) = (3 \text{ kg}) \left(1.005 \frac{\text{kJ}}{\text{kg.K}}\right) (586 \text{ K} - 293 \text{ K}) = 883.3 \text{ kJ} \end{aligned}$$
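The arithmetic above can be cross-checked in a few lines (a quick sketch reusing the example's property values):

```python
m, R, cp = 3.0, 0.287, 1.005   # kg, kJ/(kg.K), kJ/(kg.K) -- values from the example above
T1 = 293.0                     # K (20 degrees C)

T2 = 2 * T1                    # doubling the volume at constant pressure doubles the temperature
W = m * R * T1                 # boundary work: W = P(V2 - V1) = P V1 = m R T1
Q = m * cp * (T2 - T1)         # heat transfer during the constant-pressure heating

print(T2, W, Q)                # 586.0 K, ~252 kJ, ~883 kJ, consistent with the results above
```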
Lecture 24: Max Flow
Max-Flow Problem
Input
- $G = (V, E)$ is a directed graph.
- Each edge $e$ has capacity $c_e > 0$.
- Source $s$ and sink $t$ are special nodes.
Goal
- Find the maximum flow from $s$ to $t$.
Definition of a Flow
Definition
- $f: E \rightarrow \mathbb{R}_{\geq 0}$ is a function.
- Capacity constraint: $f_e \leq c_e, \forall e \in E$
- Flow conservation: $\sum_{e \text{ into } u} f_e = \sum_{e \text{ out of } u} f_e$ for every node $u \neq s, t$.
Definition
- $|f| = \sum_{e \text{ out of } s} f_e$ is the value of a flow $f$, representing the amount of flow leaving $s$.
Ford-Fulkerson Algorithm
Key Idea
- Increase the flow by sending flow along an "augmenting path".
Definition
- Given a flow $f$, an augmenting path is a path from $s$ to $t$ in the residual graph $G_f$.
Definition
- The residual graph $G_f$ has the same vertices as $G$.
- Edges in $G_f$:
- If $f_e < c_e$, there is an edge $e$ in $G_f$ with capacity $c_e - f_e$
- If $f_e > 0$, there is an edge $e'$ in $G_f$ with capacity $f_e$ (reverse edge)
Ford-Fulkerson Algorithm
- Start with $f_e = 0$ for all $e$
- While there is an augmenting path $P$ in $G_f$:
- Let $b = \min_{e \in P} (\text{capacity of } e \text{ in } G_f)$
- Augment flow along $P$ by $b$
Augmenting Flow
- For each edge $e$ in $P$:
- If $e$ is a forward edge: $f_e = f_e + b$
- If $e$ is a backward edge: $f_e = f_e - b$
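A compact Python sketch of the algorithm above (an illustrative implementation written for these notes, not code from the lecture; `max_flow` and its argument names are my own). It stores residual capacities in a nested dictionary and finds augmenting paths with BFS:

```python
from collections import defaultdict, deque

def max_flow(edges, s, t):
    """Ford-Fulkerson with BFS augmenting paths (the Edmonds-Karp variant).

    edges -- list of (u, v, capacity) tuples of a directed graph
    s, t  -- source and sink
    Returns (value of the maximum flow, final residual capacities).
    """
    # cap[u][v] is the residual capacity of edge (u, v) in G_f.
    cap = defaultdict(lambda: defaultdict(int))
    for u, v, c in edges:
        cap[u][v] += c                       # merge parallel edges

    flow = 0
    while True:
        # BFS for an augmenting path s -> t in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:                  # no augmenting path left: flow is maximum
            return flow, cap

        # Bottleneck b = minimum residual capacity along the path.
        b, v = float("inf"), t
        while parent[v] is not None:
            b = min(b, cap[parent[v]][v])
            v = parent[v]

        # Augment: forward residual capacities decrease by b, reverse ones increase by b.
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= b
            cap[v][u] += b
            v = u
        flow += b

# Small example with made-up capacities; the maximum flow from 0 to 3 is 5.
value, residual = max_flow([(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)], 0, 3)
print(value)
```

Choosing augmenting paths by BFS is one specific rule; any way of picking augmenting paths fits the Ford-Fulkerson framework described above.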
Limitations and Considerations
- Update $G_f$ after augmenting flow.
- Each iteration takes $O(m)$ time.
- With integer capacities, each iteration increases the flow value by at least 1, so the algorithm terminates.
- Max flow $\leq \sum_{e \text{ out of } s} c_e$.
Theorem
- Ford-Fulkerson finds the maximum flow in $G$.
Cuts in Graphs
Definition
- A cut $(A, B)$ of a graph $G = (V, E)$ is a partition of $V$ into two sets $A$ and $B$ such that $s \in A$ and $t \in B$.
Definition
- The capacity of a cut $(A, B)$ is the sum of the capacities of the edges from $A$ to $B$: $c(A, B) = \sum_{e \text{ out of } A} c_e$
Lemma
- For any flow $f$ and any cut $(A, B)$, the net flow across $(A, B)$ is equal to $|f|$: $\sum_{e \text{ out of } A} f_e - \sum_{e \text{ into } A} f_e = |f|$
Corollary
- For any flow $f$ and any cut $(A, B)$, $|f| \leq c(A, B)$.
Max-Flow Min-Cut Theorem
- The value of the maximum flow is equal to the capacity of the minimum cut.
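The theorem can also be checked computationally: after a maximum flow has been found, the set $A$ of vertices reachable from $s$ in the residual graph $G_f$ is the source side of a minimum cut. The sketch below (illustrative; `min_cut_source_side` is my own helper and expects the residual-capacity dictionary returned by the `max_flow` sketch earlier) recovers $A$ and the cut capacity:

```python
from collections import deque

def min_cut_source_side(edges, residual, s):
    """Return (A, capacity of the cut (A, B)), given the original edge list and the
    residual capacities left behind by a maximum-flow computation."""
    # A = vertices reachable from s along edges with positive residual capacity.
    A, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for v, c in residual.get(u, {}).items():
            if c > 0 and v not in A:
                A.add(v)
                queue.append(v)

    # Capacity of the cut: sum of the original capacities of edges leaving A.
    cut_capacity = sum(c for u, v, c in edges if u in A and v not in A)
    return A, cut_capacity

# Continuing the earlier example, A = {0} and the cut capacity equals the flow value, 5.
```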
Summary
- The Ford-Fulkerson algorithm determines the maximum flow.
- The max-flow min-cut theorem connects maximum flow to minimum cut.
Quick Start Guide to Geometry Dash
Goal
- Complete each level by jumping and flying through obstacles, timed to the music.
Controls
- Tap, click, press spacebar, or up arrow to jump.
- Hold to jump/fly continuously.
- Release to stop jumping/flying.
Icons
- Icons are customizable with colors and are unlocked via gameplay or purchase.
Game Modes
- Cube, Ship, Ball, UFO, and Wave modes offer distinct gameplay.
- Robot and Spider modes unlock as you progress.
Levels
- Includes 21 official levels plus thousands of player-created levels.
Practice
- Levels have a Practice Mode where checkpoints can be placed; practice runs do not count toward completion.
Secrets
- Hidden secrets and rewards exist. Vaults are unlocked with codes entered on the title screen.
Secret Coins
- Hidden in most official levels; collecting unlocks special content.
Tips
- Time jumps to the music.
- Practice to refine your skills.
- Persevere.
Lecture 14: October 24
Vector Spaces
Definition
- A vector space is a set $V$ with:
- An addition law: $+: V \times V \rightarrow V$
- A scalar multiplication law: $\cdot : \mathbb{R} \times V \rightarrow V$
Properties
- Associativity: $\mathbf{u}+(\mathbf{v}+\mathbf{w})=(\mathbf{u}+\mathbf{v})+\mathbf{w} \quad \forall \mathbf{u}, \mathbf{v}, \mathbf{w} \in V$
- Commutativity: $\mathbf{u}+\mathbf{v}=\mathbf{v}+\mathbf{u} \quad \forall \mathbf{u}, \mathbf{v} \in V$
- Neutral element: $\exists \mathbf{0} \in V$ such that $\mathbf{u}+\mathbf{0}=\mathbf{u} \quad \forall \mathbf{u} \in V$
- Inverse element: $\forall \mathbf{u} \in V, \exists(-\mathbf{u}) \in V$ such that $\mathbf{u}+(-\mathbf{u})=\mathbf{0}$
- Compatibility with scalar multiplication: $a \cdot(b \cdot \mathbf{u})=(a b) \cdot \mathbf{u} \quad \forall a, b \in \mathbb{R}, \mathbf{u} \in V$
- Distributivity w.r.t. scalar addition: $(a+b) \cdot \mathbf{u}=a \cdot \mathbf{u}+b \cdot \mathbf{u} \quad \forall a, b \in \mathbb{R}, \mathbf{u} \in V$
- Distributivity w.r.t. vector addition: $a \cdot(\mathbf{u}+\mathbf{v})=a \cdot \mathbf{u}+a \cdot \mathbf{v} \quad \forall a \in \mathbb{R}, \mathbf{u}, \mathbf{v} \in V$
- Identity: $1 \cdot \mathbf{u}=\mathbf{u} \quad \forall \mathbf{u} \in V$
Remarks
- We usually write $a \mathbf{u}$ instead of $a \cdot \mathbf{u}$.
- The element $\mathbf{0}$ is unique.
- The inverse $-\mathbf{u}$ is unique.
- $0 \cdot \mathbf{u}=\mathbf{0}$ for all $\mathbf{u} \in V$.
- We also have $(-1) \cdot \mathbf{u}=-\mathbf{u}$.
Examples
- $V=\mathbb{R}^{n}=\left\{\left(\begin{array}{c}x_{1} \\ \vdots \\ x_{n}\end{array}\right): x_{1}, \ldots, x_{n} \in \mathbb{R}\right\}$
  $\left(\begin{array}{c}x_{1} \\ \vdots \\ x_{n}\end{array}\right)+\left(\begin{array}{c}y_{1} \\ \vdots \\ y_{n}\end{array}\right)=\left(\begin{array}{c}x_{1}+y_{1} \\ \vdots \\ x_{n}+y_{n}\end{array}\right), \quad a\left(\begin{array}{c}x_{1} \\ \vdots \\ x_{n}\end{array}\right)=\left(\begin{array}{c}a x_{1} \\ \vdots \\ a x_{n}\end{array}\right)$
- $V=\mathbb{R}^{\infty}=\left\{\left(\begin{array}{c}x_{1} \\ x_{2} \\ \vdots\end{array}\right): x_{i} \in \mathbb{R}\right\}$
  $\left(\begin{array}{c}x_{1} \\ x_{2} \\ \vdots\end{array}\right)+\left(\begin{array}{c}y_{1} \\ y_{2} \\ \vdots\end{array}\right)=\left(\begin{array}{c}x_{1}+y_{1} \\ x_{2}+y_{2} \\ \vdots\end{array}\right), \quad a\left(\begin{array}{c}x_{1} \\ x_{2} \\ \vdots\end{array}\right)=\left(\begin{array}{c}a x_{1} \\ a x_{2} \\ \vdots\end{array}\right)$
- $V=\{\text{Polynomials with real coefficients}\}$
  $\left(a_{0}+a_{1} x+\cdots+a_{n} x^{n}\right)+\left(b_{0}+b_{1} x+\cdots+b_{m} x^{m}\right)=\left(a_{0}+b_{0}\right)+\left(a_{1}+b_{1}\right) x+\cdots$
  $c\left(a_{0}+a_{1} x+\cdots+a_{n} x^{n}\right)=c a_{0}+c a_{1} x+\cdots+c a_{n} x^{n}$
- $V=\{\text{Functions } f: \mathbb{R} \rightarrow \mathbb{R}\}$
  $(f+g)(x)=f(x)+g(x), \quad(a f)(x)=a f(x)$
Subspaces
- A subspace of a vector space $V$ is a subset $W \subset V$ that is also a vector space under the same operations $+, \cdot$.
Theorem/Criterion
- $W \subset V$ is a subspace if and only if:
- $\mathbf{0} \in W$
- $\mathbf{u}, \mathbf{v} \in W \Rightarrow \mathbf{u}+\mathbf{v} \in W$
- $a \in \mathbb{R}, \mathbf{u} \in W \Rightarrow a \mathbf{u} \in W$
Remark
- Conditions (2) and (3) are equivalent to: $\mathbf{u}, \mathbf{v} \in W, a, b \in \mathbb{R} \Rightarrow a \mathbf{u}+b \mathbf{v} \in W$
Examples
- $V=\mathbb{R}^{2}, \quad W=\left\{\left(\begin{array}{l}x \\ 0\end{array}\right): x \in \mathbb{R}\right\}$.
- $V=\mathbb{R}^{3}, \quad W=\left\{\left(\begin{array}{l}x \\ y \\ 0\end{array}\right): x, y \in \mathbb{R}\right\}$.
- $V=\mathbb{R}^{3}, \quad W=\left\{\left(\begin{array}{l}x \\ x \\ x\end{array}\right): x \in \mathbb{R}\right\}$.
- $V=\mathbb{R}^{3}, \quad W=\left\{\left(\begin{array}{l}x \\ y \\ z\end{array}\right): x+y+z=0\right\}$.
- $V=\mathbb{R}^{3}, \quad W=\left\{\left(\begin{array}{l}x \\ y \\ z\end{array}\right): x+y+z=1\right\}$. Here $\mathbf{0} \notin W$, so $W$ is not a subspace.
- $V=\{\text{Functions } f: \mathbb{R} \rightarrow \mathbb{R}\}, \quad W=\{\text{Continuous functions}\}$.
- $V=\{\text{Functions } f: \mathbb{R} \rightarrow \mathbb{R}\}, \quad W=\{\text{Differentiable functions}\}$.
- $V=\{\text{Differentiable functions}\}, \quad W=\{\text{Functions } f \text{ such that } f'' + f = 0\}$.
Linear Combinations
Definition
- Let $V$ be a vector space and $\mathbf{v}_{1}, \ldots, \mathbf{v}_{n} \in V$. A linear combination of $\mathbf{v}_{1}, \ldots, \mathbf{v}_{n}$ is a vector of the form $a_{1} \mathbf{v}_{1}+\cdots+a_{n} \mathbf{v}_{n}$ for some $a_{1}, \ldots, a_{n} \in \mathbb{R}$.
Definition
- The span of $\mathbf{v}_{1}, \ldots, \mathbf{v}_{n}$ is the set of all linear combinations of $\mathbf{v}_{1}, \ldots, \mathbf{v}_{n}$: $\operatorname{Span}\left\{\mathbf{v}_{1}, \ldots, \mathbf{v}_{n}\right\}=\left\{a_{1} \mathbf{v}_{1}+\cdots+a_{n} \mathbf{v}_{n}: a_{1}, \ldots, a_{n} \in \mathbb{R}\right\}$
Theorem
- $\operatorname{Span}\left\{\mathbf{v}_{1}, \ldots, \mathbf{v}_{n}\right\}$ is a subspace of $V$.
Definition
- We say that $\mathbf{v}_{1}, \ldots, \mathbf{v}_{n}$ span $V$ if $\operatorname{Span}\left\{\mathbf{v}_{1}, \ldots, \mathbf{v}_{n}\right\}=V$.
Examples
- $V=\mathbb{R}^{3}, \quad \mathbf{v}_{1}=\left(\begin{array}{l}1 \\ 0 \\ 0\end{array}\right), \mathbf{v}_{2}=\left(\begin{array}{l}0 \\ 1 \\ 0\end{array}\right)$. Then $\operatorname{Span}\left\{\mathbf{v}_{1}, \mathbf{v}_{2}\right\}=\left\{a\left(\begin{array}{l}1 \\ 0 \\ 0\end{array}\right)+b\left(\begin{array}{l}0 \\ 1 \\ 0\end{array}\right): a, b \in \mathbb{R}\right\}=\left\{\left(\begin{array}{l}a \\ b \\ 0\end{array}\right): a, b \in \mathbb{R}\right\}$.
- $\mathbf{e}_{1}=\left(\begin{array}{l}1 \\ 0\end{array}\right), \mathbf{e}_{2}=\left(\begin{array}{l}0 \\ 1\end{array}\right)$ span $\mathbb{R}^{2}$. Indeed, $\left(\begin{array}{l}x \\ y\end{array}\right)=x \mathbf{e}_{1}+y \mathbf{e}_{2}$.
- $\mathbf{e}_{1}=\left(\begin{array}{l}1 \\ 0 \\ 0\end{array}\right), \mathbf{e}_{2}=\left(\begin{array}{l}0 \\ 1 \\ 0\end{array}\right), \mathbf{e}_{3}=\left(\begin{array}{l}0 \\ 0 \\ 1\end{array}\right)$ span $\mathbb{R}^{3}$. Indeed, $\left(\begin{array}{l}x \\ y \\ z\end{array}\right)=x \mathbf{e}_{1}+y \mathbf{e}_{2}+z \mathbf{e}_{3}$.
- $\mathbf{v}_{1}=\left(\begin{array}{l}1 \\ 1\end{array}\right), \mathbf{v}_{2}=\left(\begin{array}{c}1 \\ -1\end{array}\right)$ span $\mathbb{R}^{2}$ (see the sketch below).
- $\mathbf{v}_{1}=\left(\begin{array}{l}1 \\ 1\end{array}\right), \mathbf{v}_{2}=\left(\begin{array}{l}2 \\ 2\end{array}\right)$ do not span $\mathbb{R}^{2}$. $\operatorname{Span}\left\{\mathbf{v}_{1}, \mathbf{v}_{2}\right\}$ is the line $y=x$, which is not equal to $\mathbb{R}^{2}$.
- $V=\{\text{Polynomials}\}, \quad \mathbf{v}_{1}=1, \mathbf{v}_{2}=x, \mathbf{v}_{3}=x^{2}, \ldots$ Then $\mathbf{v}_{1}, \mathbf{v}_{2}, \mathbf{v}_{3}, \ldots$ span $V$.
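Concretely, showing that $\mathbf{v}_{1}=\left(\begin{array}{l}1 \\ 1\end{array}\right)$ and $\mathbf{v}_{2}=\left(\begin{array}{c}1 \\ -1\end{array}\right)$ span $\mathbb{R}^{2}$ amounts to solving $a \mathbf{v}_{1}+b \mathbf{v}_{2}=\mathbf{w}$ for an arbitrary target $\mathbf{w}$. A quick NumPy sketch (illustrative; the target vector is made up):

```python
import numpy as np

# Columns are v1 = (1, 1) and v2 = (1, -1); solving A @ [a, b] = w expresses
# w as the linear combination a*v1 + b*v2.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
w = np.array([3.0, 7.0])       # an arbitrary target vector in R^2 (made-up values)

a, b = np.linalg.solve(A, w)
print(a, b)                    # 5.0 -2.0, i.e. (3, 7) = 5*v1 - 2*v2
```

Since the columns are linearly independent, this system has a unique solution for every $\mathbf{w}$, which is exactly what spanning $\mathbb{R}^{2}$ means.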
Linear Independence
Definition
- Let $V$ be a vector space and $\mathbf{v}_{1}, \ldots, \mathbf{v}_{n} \in V$. We say that $\mathbf{v}_{1}, \ldots, \mathbf{v}_{n}$ are linearly independent if the only solution to the equation $a_{1} \mathbf{v}_{1}+\cdots+a_{n} \mathbf{v}_{n}=\mathbf{0}$ is $a_{1}=a_{2}=\cdots=a_{n}=0$. If $\mathbf{v}_{1}, \ldots, \mathbf{v}_{n}$ are not linearly independent, then we say that they are linearly dependent.
Examples
- $\mathbf{e}_{1}=\left(\begin{array}{l}1 \\ 0\end{array}\right), \mathbf{e}_{2}=\left(\begin{array}{l}0 \\ 1\end{array}\right)$ are linearly independent in $\mathbb{R}^{2}$.
- $\mathbf{e}_{1}=\left(\begin{array}{l}1 \\ 0 \\ 0\end{array}\right), \mathbf{e}_{2}=\left(\begin{array}{l}0 \\ 1 \\ 0\end{array}\right), \mathbf{e}_{3}=\left(\begin{array}{l}0 \\ 0 \\ 1\end{array}\right)$ are linearly independent in $\mathbb{R}^{3}$.
- $\mathbf{v}_{1}=\left(\begin{array}{l}1 \\ 1\end{array}\right), \mathbf{v}_{2}=\left(\begin{array}{l}2 \\ 2\end{array}\right)$ are linearly dependent in $\mathbb{R}^{2}$. Indeed, notice that $\mathbf{v}_{2}=2 \mathbf{v}_{1}$, so $-2 \mathbf{v}_{1}+\mathbf{v}_{2}=\mathbf{0}$.
- $\mathbf{v}_{1}=\left(\begin{array}{l}1 \\ 1\end{array}\right), \mathbf{v}_{2}=\left(\begin{array}{c}1 \\ -1\end{array}\right)$ are linearly independent in $\mathbb{R}^{2}$.
- $\mathbf{v}_{1}=\left(\begin{array}{l}1 \\ 0 \\ 0\end{array}\right), \mathbf{v}_{2}=\left(\begin{array}{l}1 \\ 1 \\ 0\end{array}\right), \mathbf{v}_{3}=\left(\begin{array}{l}1 \\ 1 \\ 1\end{array}\right)$ are linearly independent in $\mathbb{R}^{3}$.
- $V=\{\text{Functions } f: \mathbb{R} \rightarrow \mathbb{R}\}, \quad f(x)=x, g(x)=x^{2}$. Then $f$ and $g$ are linearly independent.
Theorem
- $\mathbf{v}_{1}, \ldots, \mathbf{v}_{n}$ are linearly dependent if and only if one of the vectors is a linear combination of the others.
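For vectors in $\mathbb{R}^{n}$, linear independence can be tested numerically: stack the vectors as columns of a matrix and compare its rank with the number of vectors. A small NumPy sketch (illustrative; `linearly_independent` is my own helper), applied to the examples above:

```python
import numpy as np

def linearly_independent(*vectors):
    # Vectors are independent exactly when the matrix with them as columns has full column rank.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

# Examples from the notes:
print(linearly_independent([1, 1], [1, -1]))                   # True
print(linearly_independent([1, 1], [2, 2]))                    # False: v2 = 2*v1
print(linearly_independent([1, 0, 0], [1, 1, 0], [1, 1, 1]))   # True
```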
Basis
Definition
- A basis of a vector space $V$ is a set of vectors $\mathbf{v}_{1}, \ldots, \mathbf{v}_{n}$ such that:
- $\mathbf{v}_{1}, \ldots, \mathbf{v}_{n}$ are linearly independent.
- $\mathbf{v}_{1}, \ldots, \mathbf{v}_{n}$ span $V$.
Examples
- $\mathbf{e}_{1}=\left(\begin{array}{l}1 \\ 0\end{array}\right), \mathbf{e}_{2}=\left(\begin{array}{l}0 \\ 1\end{array}\right)$ is a basis of $\mathbb{R}^{2}$.
- $\mathbf{e}_{1}=\left(\begin{array}{l}1 \\ 0 \\ 0\end{array}\right), \mathbf{e}_{2}=\left(\begin{array}{l}0 \\ 1 \\ 0\end{array}\right), \mathbf{e}_{3}=\left(\begin{array}{l}0 \\ 0 \\ 1\end{array}\right)$ is a basis of $\mathbb{R}^{3}$.
- $\mathbf{v}_{1}=\left(\begin{array}{l}1 \\ 1\end{array}\right), \mathbf{v}_{2}=\left(\begin{array}{c}1 \\ -1\end{array}\right)$ is a basis of $\mathbb{R}^{2}$.
- $V=\{\text{Polynomials}\}, \quad \mathbf{v}_{1}=1, \mathbf{v}_{2}=x, \mathbf{v}_{3}=x^{2}, \ldots$ is a basis of $V$.
Definition
- We say that $V$ is finite-dimensional if it has a basis with a finite number of vectors.
Theorem
- If $V$ is finite-dimensional, then all bases of $V$ have the same number of vectors.
Definition
- The dimension of a finite-dimensional vector space $V$ is the number of vectors in any basis of $V$.
Examples
- $\operatorname{dim} \mathbb{R}^{2}=2$
- $\operatorname{dim} \mathbb{R}^{3}=3$
- $\operatorname{dim}\{\text{Polynomials of degree} \leq n\}=n+1$ (basis: $1, x, x^{2}, \ldots, x^{n}$).
- $\mathbb{R}^{\infty}$ is not finite-dimensional.
Theorem
- Let $V$ be a vector space of dimension $n$.
- If $\mathbf{v}_{1}, \ldots, \mathbf{v}_{n}$ are linearly independent, then they span $V$ (so they form a basis).
- If $\mathbf{v}_{1}, \ldots, \mathbf{v}_{n}$ span $V$, then they are linearly independent (so they form a basis).
- If $\mathbf{v}_{1}, \ldots, \mathbf{v}_{m}$ are linearly independent and $m < n$, then they can be extended to a basis of $V$.