
Complete course introductions
Signed-off-by: zeramorphic <[email protected]>
zeramorphic committed Aug 16, 2023
1 parent 08201a6 commit c20de33
Showing 12 changed files with 84 additions and 23 deletions.
14 changes: 13 additions & 1 deletion ib/antop/import.tex
@@ -1,6 +1,18 @@
\chapter[Analysis and Topology \\ \textnormal{\emph{Lectured in Michaelmas \oldstylenums{2021} by \textsc{Dr.\ V.\ Zs\'ak}}}]{Analysis and Topology}
\emph{\Large Lectured in Michaelmas \oldstylenums{2021} by \textsc{Dr.\ V.\ Zs\'ak}}

[[INTRODUCTION]]
In the analysis part of the course, we continue the study of convergence from Analysis I.
We define a stronger version of convergence, called uniform convergence, and show that it has some very desirable properties.
For example, if integrable functions \( f_n \) converge uniformly to the integrable function \( f \), then the integrals of the \( f_n \) converge to the integral of \( f \).
The same cannot be said in general about non-uniform convergence.
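To give one standard counterexample: the functions \( f_n \) defined by \( f_n(x) = n \) for \( x \in (0, 1/n) \) and \( f_n(x) = 0 \) otherwise converge pointwise (but not uniformly) to the zero function on \( [0,1] \), yet
\[ \int_0^1 f_n = 1 \quad \text{for every } n, \]
so the integrals do not converge to \( \int_0^1 0 = 0 \).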
We also extend our study of differentiation to functions with multiple input and output variables, and rigorously define the derivative in this higher-dimensional context.

In the topology part of the course, we consider familiar spaces such as \( [a,b], \mathbb C, \mathbb R^n \), and generalise their properties.
We arrive at the definition of a metric space, which encapsulates all of the information about how near or far points are from others.
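Concretely, a metric space is a set \( X \) together with a map \( d \colon X \times X \to [0, \infty) \) such that for all \( x, y, z \in X \),
\[ d(x,y) = 0 \iff x = y, \qquad d(x,y) = d(y,x), \qquad d(x,z) \leq d(x,y) + d(y,z). \]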
From here, we can define notions such as continuous functions between metric spaces in a way that does not depend on the particular space in question.

We then generalise even further to define topological spaces.
The only information a topological space contains is the neighbourhoods of each point, but it turns out that this is still enough to define continuous functions and related notions.
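For instance, a function \( f \colon X \to Y \) between topological spaces is continuous exactly when the preimage \( f^{-1}(U) \) of every open set \( U \subseteq Y \) is open in \( X \), a definition that makes no reference to distances at all.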
We study topological spaces in an abstract setting, and prove important facts that are used in many later courses.

\subfile{../../ib/antop/main.tex}
8 changes: 7 additions & 1 deletion ib/ca/import.tex
@@ -1,6 +1,12 @@
\chapter[Complex Analysis \\ \textnormal{\emph{Lectured in Lent \oldstylenums{2022} by \textsc{Prof.\ N.\ Wickramasekera}}}]{Complex Analysis}
\emph{\Large Lectured in Lent \oldstylenums{2022} by \textsc{Prof.\ N.\ Wickramasekera}}

[[INTRODUCTION]]
Complex differentiation is a stronger notion than real differentiation.
Many functions that are differentiable as functions of two real variables are not complex differentiable; the complex conjugate function is one example.
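To see this for the conjugate map \( z \mapsto \bar z \): the difference quotient \( \frac{\overline{z + h} - \bar z}{h} = \frac{\bar h}{h} \) equals \( 1 \) along real \( h \) and \( -1 \) along purely imaginary \( h \), so it has no limit as \( h \to 0 \).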
This stronger notion allows us to prove some surprising results.
It turns out that if a function is complex differentiable once in a neighbourhood of a point, then it is given by a convergent power series in some neighbourhood of that point.

Another interesting result is Cauchy's integral formula: if a function is complex differentiable on and inside a closed loop, its value at any point inside the loop can be recovered from a certain integral over the loop.
A similar result can be used to obtain an arbitrary derivative of a function at a point by using a single integral.
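Explicitly, for a function \( f \) that is complex differentiable on and inside a positively oriented loop \( \gamma \), and a point \( a \) inside \( \gamma \),
\[ f(a) = \frac{1}{2\pi i} \oint_\gamma \frac{f(z)}{z - a} \, \mathrm{d}z, \qquad f^{(n)}(a) = \frac{n!}{2\pi i} \oint_\gamma \frac{f(z)}{(z - a)^{n+1}} \, \mathrm{d}z. \]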

\subfile{../../ib/ca/main.tex}
6 changes: 5 additions & 1 deletion ib/geom/import.tex
@@ -1,6 +1,10 @@
\chapter[Geometry \\ \textnormal{\emph{Lectured in Lent \oldstylenums{2022} by \textsc{Prof.\ I.\ Smith}}}]{Geometry}
\emph{\Large Lectured in Lent \oldstylenums{2022} by \textsc{Prof.\ I.\ Smith}}

[[INTRODUCTION]]
This course serves as an introduction to the modern study of surfaces in geometry.
A surface is a topological space that locally looks like the plane.
The notions of length and area on a surface are governed by mathematical objects called the fundamental forms of the surface at particular points.
We can use integrals to work out exact lengths and areas.
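For instance, for a parametrised surface patch \( \sigma(u, v) \), writing \( E = \sigma_u \cdot \sigma_u \), \( F = \sigma_u \cdot \sigma_v \), \( G = \sigma_v \cdot \sigma_v \) for the coefficients of the first fundamental form, the length of a curve \( t \mapsto \sigma(u(t), v(t)) \) and the area of a region \( R \) of the patch are
\[ \int \sqrt{E \dot u^2 + 2 F \dot u \dot v + G \dot v^2} \, \mathrm{d}t \qquad \text{and} \qquad \iint_R \sqrt{EG - F^2} \, \mathrm{d}u \, \mathrm{d}v. \]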
We study various spaces, including spaces of constant curvature, such as the plane, spheres, and hyperbolic space.

\subfile{../../ib/geom/main.tex}
10 changes: 9 additions & 1 deletion ib/grm/import.tex
@@ -1,6 +1,14 @@
\chapter[Groups, Rings and Modules \\ \textnormal{\emph{Lectured in Lent \oldstylenums{2022} by \textsc{Dr.\ R.\ Zhou}}}]{Groups, Rings and Modules}
\emph{\Large Lectured in Lent \oldstylenums{2022} by \textsc{Dr.\ R.\ Zhou}}

[[INTRODUCTION]]
A ring is an algebraic structure equipped with addition and multiplication operations.
Common examples of rings include \( \mathbb Z, \mathbb Q, \mathbb R, \mathbb C \), the Gaussian integers \( \mathbb Z[i] = \qty{a + bi \mid a, b \in \mathbb Z} \), the quotient \( \faktor{\mathbb Z}{n\mathbb Z} \), and the set of polynomials with complex coefficients.
We can study factorisation in a general ring, generalising the idea of factorising integers or polynomials.
Certain rings, called unique factorisation domains, share with the integers the property that every nonzero non-invertible element can be expressed as a product of irreducibles, uniquely up to reordering and multiplication by units (in \( \mathbb Z \), the irreducibles are the prime numbers).
This property, among many others, is studied in this course.
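For example, in the Gaussian integers the element \( 5 \) is no longer irreducible: it factorises as \( 5 = (2 + i)(2 - i) \), and this factorisation is unique up to reordering and multiplication by the units \( \pm 1, \pm i \).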

Modules are like vector spaces, but instead of being defined over a field, they are defined over an arbitrary ring.
In particular, every vector space is a module, because every field is a ring.
We use the theory built up over the course to prove that every \( n \times n \) complex matrix can be written in Jordan normal form.
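For instance, a single Jordan block with eigenvalue \( \lambda \) has the form
\[ \begin{pmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{pmatrix}, \]
and a matrix in Jordan normal form is a block-diagonal arrangement of such blocks.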

\subfile{../../ib/grm/main.tex}
7 changes: 6 additions & 1 deletion ib/linalg/import.tex
@@ -1,6 +1,11 @@
\chapter[Linear Algebra \\ \textnormal{\emph{Lectured in Michaelmas \oldstylenums{2021} by \textsc{Prof.\ P.\ Raphael}}}]{Linear Algebra}
\emph{\Large Lectured in Michaelmas \oldstylenums{2021} by \textsc{Prof.\ P.\ Raphael}}

[[INTRODUCTION]]
Linear algebra is the field of study that deals with vector spaces and linear maps.
A vector space can be thought of as a generalisation of \( \mathbb R^n \) or \( \mathbb C^n \), although vector spaces can be defined over any field (not just \( \mathbb R \) or \( \mathbb C \)) and may be infinite-dimensional.
In this course, we mainly study finite-dimensional vector spaces and the linear functions between them.
Any linear map between finite-dimensional vector spaces can be encoded as a matrix.
Such maps have properties such as their trace and determinant, which can be easily obtained from a matrix representing them.
As was shown for real matrices in Vectors and Matrices, a matrix with nonzero determinant can be inverted.
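For instance, for
\[ A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \]
the trace is \( a + d \), the determinant is \( ad - bc \), and when \( ad - bc \neq 0 \) the inverse is
\[ A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}. \]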

\subfile{../../ib/linalg/main.tex}
15 changes: 8 additions & 7 deletions ib/markov/01_introduction.tex
@@ -2,16 +2,16 @@ \subsection{Definition}
Let \( I \) be a finite or countable set.
All of our random variables will be defined on the same probability space \( (\Omega, \mathcal F, \mathbb P) \).
\begin{definition}
A stochastic process \( (X_n)_{n \geq 0} \) is called a \textit{Markov chain} if \( \forall n \geq 0 \) and for \( x_1 \dots x_{n+1} \in I \),
A stochastic process \( (X_n)_{n \geq 0} \) is called a \textit{Markov chain} if for all \( n \geq 0 \) and for all \( x_1, \dots, x_{n+1} \in I \),
\[
\prob{X_{n+1} = x_{n+1} \mid X_n = x_n, \dots,X_1 = x_1} = \prob{X_{n+1} = x_{n+1} \mid X_n = x_n}
\]
\end{definition}
We can think of \( n \) as a discrete measure of time.
If \( \prob{X_{n+1} = y \mid X_n = x} \) for all \( x, y \) is independent of \( n \), then \( X \) is called time-homogeneous.
If \( \prob{X_{n+1} = y \mid X_n = x} \) for all \( x, y \) is independent of \( n \), then \( X \) is called a time-homogeneous Markov chain.
Otherwise, \( X \) is called time-inhomogeneous.
In this course, we only study time-homogeneous Markov chains.
If we consider time-homogeneous chains only, we may as well take \( n = 0 \) and we can write
If we consider only time-homogeneous chains, we may as well take \( n = 0 \) and we can write
\[
P(x,y) = \prob{X_1 = y \mid X_0 = x};\quad \forall x,y \in I
\]
@@ -24,7 +24,7 @@ \subsection{Definition}
\sum_{y \in I} P(x,y) = 1
\]
\begin{remark}
The index set does not need to be \( \mathbb N \); it could alternatively be \( \qty{0,1,\dots,N} \) for \( N \in \mathbb N \).
The index set does not need to be \( \mathbb N \); it could alternatively be the set \( \qty{0,1,\dots,N} \) for \( N \in \mathbb N \).
\end{remark}
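For example, a two-state chain on \( I = \qty{1, 2} \) has transition matrix
\[ P = \begin{pmatrix} 1 - \alpha & \alpha \\ \beta & 1 - \beta \end{pmatrix} \]
for some \( \alpha, \beta \in [0, 1] \): from state 1 the chain moves to state 2 with probability \( \alpha \), and from state 2 it moves to state 1 with probability \( \beta \), independently of the past.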
We say that \( X \) is \(\Markov{\lambda, P}\) if \( X_0 \) has distribution \(\lambda\), and \( P \) is the transition matrix.
Hence,
@@ -79,9 +79,10 @@ \subsection{Independence of sequences}
\]
Let \( X = (X_n), Y = (Y_n) \) be sequences of discrete random variables.
They are independent if for all \(k,m\), \( i_1 < \dots < i_k \), \( j_1 < \dots < j_m \),
\[
\prob{X_1 = x_1, \dots, X_{i_k} = x_{i_k}, Y_{j_1} = y_{j_1}, \dots, Y_{j_m}} = \prob{X_1 = x_1, \dots, X_{i_k} = x_{i_k}} \prob{Y_{j_1} = y_{j_1}, \dots, Y_{j_m}}
\]
\begin{align*}
&\prob{X_{i_1} = x_{i_1}, \dots, X_{i_k} = x_{i_k}, Y_{j_1} = y_{j_1}, \dots, Y_{j_m} = y_{j_m}} \\
&= \prob{X_{i_1} = x_{i_1}, \dots, X_{i_k} = x_{i_k}} \prob{Y_{j_1} = y_{j_1}, \dots, Y_{j_m} = y_{j_m}}
\end{align*}

\subsection{Simple Markov property}
\begin{theorem}
15 changes: 9 additions & 6 deletions ib/markov/05_invariant_distributions.tex
@@ -412,7 +412,8 @@ \subsection{Aperiodicity}
\end{lemma}
\begin{proof}
First, if \( P^n(i,i) > 0 \) for all sufficiently large \( n \), then the greatest common divisor of all sufficiently large numbers is one, so this direction is trivial.
Conversely, let \( D(i) = \qty{n \geq 1 \colon P^n(i,i) > 0} \).
Conversely, let
\[ D(i) = \qty{n \geq 1 \colon P^n(i,i) > 0} \]
Observe that if \( a, b \in D(i) \) then \( a + b \in D(i) \).

We claim that \( D(i) \) contains two consecutive integers.
@@ -426,7 +427,8 @@ \subsection{Aperiodicity}
This is a contradiction, since we have found two points in \( D(i) \) with a distance smaller than the minimal distance.

Now, let \( n_1, n_1 + 1 \) be elements of \( D(i) \).
Then \( \qty{x n_1 + y(n_1 + 1) \colon x,y \in \mathbb N } \subseteq D(i) \).
Then
\[ \qty{x n_1 + y(n_1 + 1) \colon x,y \in \mathbb N } \subseteq D(i) \]
It is then easy to check that \( D(i) \supseteq \qty{n \colon n \geq n_1^2} \).
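One way to check this: for \( n \geq n_1^2 \), write \( n - n_1^2 = q n_1 + r \) with \( q \geq 0 \) and \( 0 \leq r < n_1 \); then
\[ n = (n_1 + q - r) n_1 + r (n_1 + 1), \]
and both coefficients are non-negative since \( r \leq n_1 - 1 \).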
\end{proof}
\begin{lemma}
@@ -445,8 +447,7 @@ \subsection{Aperiodicity}

\subsection{Positive recurrent limiting behaviour}
\begin{theorem}
Let \( P \) be irreducible and aperiodic with invariant distribution \( \pi \).
Let \( X \sim \Markov{\lambda, P} \).
Let \( P \) be irreducible and aperiodic with invariant distribution \( \pi \), and further let \( X \sim \Markov{\lambda, P} \).
Then for all \( y \in I \), \( \prob{X_n = y} \to \pi_y \) as \( n \to \infty \).
Taking \( \lambda = \delta_x \), we get \( p_{xy}(n) \to \pi(y) \) as \( n \to \infty \).
\end{theorem}
@@ -501,7 +502,8 @@ \subsection{Positive recurrent limiting behaviour}
Let \( A = \qty{Z_{n-1} = z_{n-1}, \dots, Z_0 = z_0} \).
We need to show \( \prob{Z_{n+1} = y \mid Z_n = x, A} = P(x,y) \).
\begin{align*}
\prob{Z_{n+1} = y \mid Z_n = x, A} & = \prob{Z_{n+1} = y, T > n \mid Z_n = x, A} + \prob{Z_{n+1} = y, T \leq n \mid Z_n = x, A} \\
\prob{Z_{n+1} = y \mid Z_n = x, A} & = \prob{Z_{n+1} = y, T > n \mid Z_n = x, A} \\
& + \prob{Z_{n+1} = y, T \leq n \mid Z_n = x, A} \\
& = \prob{X_{n+1} = y \mid T > n, Z_n = x, A} \prob{T > n \mid Z_n = x, A} \\
& + \prob{Y_{n+1} = y \mid T \leq n, Z_n = x, A} \prob{T \leq n \mid Z_n = x, A}
\end{align*}
@@ -589,7 +591,8 @@ \subsection{Null recurrent limiting behaviour}
\[
\mu P^n(z) \leq \frac{1}{\nu_y(A)} \nu_y(z) = \frac{\nu_y(z)}{\nu_y(A)}
\]
Let \( (X, Y) \) be a Markov chain with matrix \( \widetilde P \), started according to \( \mu \times \delta_x \), so \( \prob{X_0 = z, Y_0 = w} = \mu(z) \delta_x(w) \).
Let \( (X, Y) \) be a Markov chain with matrix \( \widetilde P \), started according to \( \mu \times \delta_x \), so
\[ \prob{X_0 = z, Y_0 = w} = \mu(z) \delta_x(w) \]
Now, let
\[
T = \inf\qty{n \geq 1 \colon (X_n, Y_n) = (x,x)}
7 changes: 6 additions & 1 deletion ib/markov/import.tex
@@ -1,6 +1,11 @@
\chapter[Markov Chains \\ \textnormal{\emph{Lectured in Michaelmas \oldstylenums{2021} by \textsc{Dr.\ P.\ Sousi}}}]{Markov Chains}
\emph{\Large Lectured in Michaelmas \oldstylenums{2021} by \textsc{Dr.\ P.\ Sousi}}

[[INTRODUCTION]]
A Markov chain is a common type of random process, where each state in the process depends only on the previous one.
Due to their simplicity, Markov processes show up in many areas of probability theory and have many real-world applications, for example in computer science.

One example of a Markov chain is a simple random walk, where a particle moves around an infinite lattice of points, choosing its next direction to move at random.
It turns out that if the lattice is one- or two-dimensional, the particle will return to its starting point infinitely many times, with probability 1.
However, if the lattice is three-dimensional or higher, the probability that the particle ever returns to its starting point is strictly less than 1, and with probability 1 it returns only finitely many times.
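Concretely, the simple random walk on \( \mathbb Z^d \) is the Markov chain with transition probabilities
\[ P(x, x \pm e_i) = \frac{1}{2d}, \quad i = 1, \dots, d, \]
where \( e_1, \dots, e_d \) are the standard basis vectors, and all other transitions have probability zero.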

\subfile{../../ib/markov/main.tex}
10 changes: 9 additions & 1 deletion ib/methods/import.tex
@@ -1,6 +1,14 @@
\chapter[Methods \\ \textnormal{\emph{Lectured in Michaelmas \oldstylenums{2021} by \textsc{Prof.\ E.\ P.\ Shellard}}}]{Methods}
\emph{\Large Lectured in Michaelmas \oldstylenums{2021} by \textsc{Prof.\ E.\ P.\ Shellard}}

[[INTRODUCTION]]
In this course, we discuss various methods for solving differential equations.
Different forms of differential equations need different solution strategies, and we study a wide range of common types of differential equation.

A particularly powerful method for solving differential equations involves the use of Green's functions.
For example, a physical system may involve a mass distribution spread over a region of space.
Green's functions allow the governing equation to be solved for a single point mass, and the point-mass solutions can then be integrated to give the solution for the extended body.
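As a simple illustrative case, for the equation \( -u''(x) = f(x) \) on \( [0,1] \) with \( u(0) = u(1) = 0 \), the solution can be written as
\[ u(x) = \int_0^1 G(x, \xi) f(\xi) \, \mathrm{d}\xi, \qquad G(x, \xi) = \begin{cases} x(1 - \xi) & x \leq \xi, \\ \xi(1 - x) & x \geq \xi, \end{cases} \]
where \( G \) is the Green's function for this boundary value problem.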

Fourier transforms are another way to solve differential equations.
Sometimes a differential equation is easier to solve after applying the Fourier transform to the relevant function; applying the inverse Fourier transform then recovers the solution to the original equation.
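For example, with the convention \( \hat u(k) = \int_{-\infty}^{\infty} u(x) e^{-ikx} \, \mathrm{d}x \), differentiation becomes multiplication, so \( \widehat{u''}(k) = -k^2 \hat u(k) \). The equation \( u'' - u = -f \) therefore becomes the algebraic relation
\[ \hat u(k) = \frac{\hat f(k)}{1 + k^2}, \]
and applying the inverse transform recovers \( u \).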

\subfile{../../ib/methods/main.tex}
5 changes: 4 additions & 1 deletion ib/quantum/import.tex
@@ -1,6 +1,9 @@
\chapter[Quantum Mechanics \\ \textnormal{\emph{Lectured in Michaelmas \oldstylenums{2021} by \textsc{Dr.\ M.\ Ubiali}}}]{Quantum Mechanics}
\emph{\Large Lectured in Michaelmas \oldstylenums{2021} by \textsc{Dr.\ M.\ Ubiali}}

[[INTRODUCTION]]
In this course, we explore the basics of quantum mechanics using the Schr\"odinger equation.
This equation explains how a quantum wavefunction changes over time.
By solving the Schr\"odinger equation with different inputs and boundary conditions, we can understand some of the ways in which quantum mechanics differs from classical physics, explaining some of the scientific discoveries of the past century.
We prove some theoretical facts about quantum operators and observables, such as the uncertainty theorem, which roughly states that the position and momentum of a particle cannot both be known exactly at the same time.
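In its standard one-dimensional form, the time-dependent Schr\"odinger equation for a particle of mass \( m \) in a potential \( V \) reads
\[ i \hbar \frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \Psi}{\partial x^2} + V(x) \Psi, \]
and the uncertainty relation takes the quantitative form \( \Delta x \, \Delta p \geq \frac{\hbar}{2} \).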

\subfile{../../ib/quantum/main.tex}
2 changes: 1 addition & 1 deletion ib/stats/06_normal_linear_model.tex
@@ -526,7 +526,7 @@ \subsection{Inference}
\end{example}
The above two results are exact; no approximations were made.

\subsection{F-tests}
\subsection{\texorpdfstring{\( F \)}{F}-tests}
We wish to test whether a collection of predictors \( \beta_i \) are equal to zero.
Without loss of generality, we will take the first \( p_0 \leq p \) predictors.
We have \( H_0 \colon \beta_1 = \dots = \beta_{p_0} = 0 \), and \( H_1 \colon \beta \in \mathbb R^p \).
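In the usual formulation, writing \( \mathrm{RSS} \) and \( \mathrm{RSS}_0 \) for the residual sums of squares of the full model and of the model fitted under \( H_0 \), the test statistic
\[ F = \frac{(\mathrm{RSS}_0 - \mathrm{RSS}) / p_0}{\mathrm{RSS} / (n - p)} \]
has an \( F_{p_0, n - p} \) distribution under \( H_0 \), and we reject \( H_0 \) when \( F \) is large.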
8 changes: 7 additions & 1 deletion ib/stats/import.tex
@@ -1,6 +1,12 @@
\chapter[Statistics \\ \textnormal{\emph{Lectured in Lent \oldstylenums{2022} by \textsc{Dr.\ S.\ Bacallado}}}]{Statistics}
\emph{\Large Lectured in Lent \oldstylenums{2022} by \textsc{Dr.\ S.\ Bacallado}}

[[INTRODUCTION]]
An estimator is a random variable, computed from the observed data, that is used to approximate an unknown parameter.
For instance, the parameter could be the mean of a normal distribution, and the estimator could be a sample mean.
In this course, we study how estimators behave, what properties they have, and how we can use them to make conclusions about the real parameters.
This is called parametric inference: the study of inferring parameters from statistics of sample data.
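For example, if \( X_1, \dots, X_n \) are i.i.d.\ \( N(\mu, \sigma^2) \), the sample mean \( \bar X = \frac{1}{n} \sum_{i=1}^n X_i \) is an estimator of \( \mu \) with distribution \( N(\mu, \sigma^2 / n) \), so it concentrates around the true parameter as \( n \) grows.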

Towards the end of the course, we study the normal linear model, which is a useful way to model data that is believed to depend linearly on a vector of inputs, together with some normally distributed noise.
Even nonlinear patterns can be analysed using this model, by letting the inputs to the model be polynomials in the real-world data.
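In symbols, the model asserts \( Y = X \beta + \varepsilon \) with \( \varepsilon \sim N(0, \sigma^2 I) \); a nonlinear trend in a single covariate \( x \) can be captured by taking the rows of \( X \) to be \( (1, x_i, x_i^2, \dots, x_i^k) \).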

\subfile{../../ib/stats/main.tex}
