#art #mathematics

# Mathematical paradoxes

![[DALL·E 2022-07-27 20.37.21 - An infinitely recursive labyrinth in the multiverse, digital art by Escher.png]]

$ \begin{equation*} \int_0^\infty \frac{\sin x}{x} \, dx = \frac{\pi}{2} \end{equation*} $

A telescoping product: each numerator cancels the preceding denominator, so only $\frac{1}{21}$ survives.

$ \begin{equation*} \frac{1}{2} \cdot \frac{2}{3} \cdot \frac{3}{4} \cdot \frac{4}{5} \cdots \frac{19}{20} \cdot \frac{20}{21} = \frac{1}{21} \end{equation*} $

![[DALL·E 2022-07-27 20.39.04 - An infinitely recursive labyrinth, walls being human arms, digital art by Escher.png]]

$ \begin{equation*} \int_0^\infty e^{-x} \, dx = 1 \end{equation*} $

![[DALL·E 2022-07-27 20.39.59 - An infinitely recursive labyrinth, walls made of redish human arms, digital art by Escher.png]]

$ \begin{equation*} \int_0^\infty x^n e^{-x} \, dx = n! \end{equation*} $

$ \begin{equation*} \int_0^\infty x e^{-x^2} \, dx = \frac{1}{2} \end{equation*} $

![[DALL·E 2022-07-27 20.45.02 - An infinitely recursive labyrinth made of terrifying tree branches, painting by Picasso.png]]

$ \begin{equation*} \int_0^\infty x^{2n} e^{-x^2} \, dx = \frac{(2n-1)!!}{2^{n+1}} \sqrt{\pi} \end{equation*} $

A resistive voltage divider splits the input voltage in proportion to the resistances:

$ \begin{equation*} A_1 = \frac{R_1}{R_1 + R_2}, \end{equation*} $

![[DALL·E 2022-07-27 20.45.42 - An infinitely recursive labyrinth made of terrifying tree branches, painting by Escher.png]]

$ \begin{equation*} A_2 = \frac{R_2}{R_1 + R_2}. 
\end{equation*} $

$ \begin{equation*} V_{in} = V_1 + V_2, \end{equation*} $

![[DALL·E 2022-07-27 20.51.17.png]]

$ \begin{equation*} V_1 = A_1 V_{in}, \end{equation*} $

$ \begin{equation*} V_2 = A_2 V_{in}. \end{equation*} $

![[DALL·E 2022-07-27 20.53.55 - A mathematical paradox in the darkest forest, upside down trees, painting by Escher.png]]

For a pendulum, the chain rule gives (with $\omega = d\theta/dt$, so the sign stays negative throughout):

$ \begin{equation*} \frac{d\sigma}{dt} = \frac{d\sigma}{d\theta} \frac{d\theta}{dt} = -\frac{mgl\sin\theta}{I} \frac{d\theta}{dt} = -\frac{mgl}{I} \sin\theta \, \omega \end{equation*} $

We can now substitute the moment of inertia of a uniform rod, $I = \frac{1}{3}ml^2$, into the above expression:

$ \begin{equation*} \frac{d\sigma}{dt} = -\frac{mgl}{\frac{1}{3}ml^2} \sin\theta \frac{d\theta}{dt} = -\frac{3g}{l} \sin\theta \frac{d\theta}{dt} \end{equation*} $

Switching topics (and reusing $\sigma$, now for a cross section): the differential cross section for a $2 \to 2$ process with massless external particles is

$ \begin{equation*} \frac{d\sigma}{d\Omega} = \frac{1}{64\pi^2 s} \sum_{\text{spin}} \sum_{\text{color}} \left|\mathcal{M}\right|^2 \end{equation*} $

Where $s$ is the Mandelstam variable (the squared centre-of-mass energy). We can calculate the differential cross section for a process by first calculating the matrix element $\mathcal{M}$ and then summing over all the possible initial- and final-state configurations. For example, let's look at the process $e^+e^-\rightarrow\mu^+\mu^-$. This is a very simple process that can be calculated in the same way as $e^+e^-\rightarrow e^+e^-$. 
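Integrating the $e^+e^-\rightarrow\mu^+\mu^-$ differential cross section over angles gives the well-known high-energy result $\sigma = \frac{4\pi\alpha^2}{3s}$ (muon mass neglected). A minimal numerical check in Python — `sigma_mumu_nb` and the conversion constant name are illustrative, not from any library:

```python
import math

ALPHA = 1 / 137.035999      # fine-structure constant
GEV2_TO_NB = 0.3894e6       # hbar^2 c^2 ~= 0.3894 mb*GeV^2, and 1 mb = 1e6 nb

def sigma_mumu_nb(sqrt_s_gev):
    """Total cross section for e+e- -> mu+mu- in the high-energy limit,
    sigma = 4*pi*alpha^2 / (3*s), returned in nanobarns."""
    s = sqrt_s_gev ** 2                          # Mandelstam s in GeV^2
    return 4 * math.pi * ALPHA**2 / (3 * s) * GEV2_TO_NB

# At sqrt(s) = 10 GeV the classic textbook value is about 0.87 nb.
print(f"{sigma_mumu_nb(10.0):.3f} nb")
```

This reproduces the familiar "point cross section" quoted in collider physics at $\sqrt{s} = 10$ GeV.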
The tree-level matrix element (including the coupling and the photon propagator, $e^2/q^2$) is

$ \begin{equation*} \mathcal{M} = \frac{e^2}{q^2} \, \bar{v}(p_2)\gamma^{\mu}u(p_1) \, \bar{u}(p_3)\gamma_{\mu}v(p_4) \end{equation*} $

where $p_1, p_2$ are the incoming electron and positron momenta and $p_3, p_4$ are the outgoing muon momenta.

![[DALL·E 2022-07-27 20.54.58 - A mathematical paradox in the city, upside down buildings, painting by Escher.png]]

## Using the Taylor series to approximate the sensitivity

Let's say we want to know the sensitivity of $f$ to a change in $p$, where $p$ is an element of the parameter set $\mathcal{P}$ and $p_0 \in \mathcal{P}$ is the reference parameter. We can use the Taylor series to approximate the sensitivity as follows:

$ \begin{equation*} f(p) \approx f(p_0) + \left.\frac{\partial f}{\partial p}\right|_{p_0}(p - p_0) \end{equation*} $

The first term on the right-hand side is the value of the model output at the reference parameter (i.e. the nominal value). The second term is the first-order sensitivity of the model output to a change in $p$.

![[DALL·E 2022-07-27 21.52.00 - A mathematical paradox in the Eden garden, painting by Escher.png]]

## Computing the sensitivity

Let's define the function below:

$ \begin{equation*} f(p) = p^2 \end{equation*} $

Its sensitivity at the reference parameter is $\left.\frac{df}{dp}\right|_{p_0} = 2p_0$, so $f(p) \approx p_0^2 + 2p_0(p - p_0)$.

Derivatives can be chained to higher orders in the same way. For example, for the softplus function $y = \ln(1 + e^x)$, whose first derivative is the logistic function:

$ \begin{equation*} \frac{dy}{dx} = \frac{1}{1 + e^{-x}} \end{equation*} $

$ \begin{equation*} \frac{d^2y}{dx^2} = \frac{e^{-x}}{(1 + e^{-x})^2} \end{equation*} $

$ \begin{equation*} \frac{d^3y}{dx^3} = \frac{e^{-2x} - e^{-x}}{(1 + e^{-x})^3} \end{equation*} $

$ \begin{equation*} \frac{d^4y}{dx^4} = \frac{e^{-3x} - 4e^{-2x} + e^{-x}}{(1 + e^{-x})^4} \end{equation*} $

The pattern continues at every order (the numerator coefficients are, up to sign, the Eulerian numbers), and the derivatives never vanish identically. 
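The first-order sensitivity can be sketched numerically with a central finite difference; `sensitivity` and `taylor_approx` are hypothetical helper names introduced here for illustration:

```python
def sensitivity(f, p0, h=1e-6):
    """First-order sensitivity df/dp at the reference parameter p0,
    estimated with a central finite difference."""
    return (f(p0 + h) - f(p0 - h)) / (2 * h)

def taylor_approx(f, p0, p):
    """First-order Taylor approximation f(p) ~ f(p0) + f'(p0) (p - p0)."""
    return f(p0) + sensitivity(f, p0) * (p - p0)

f = lambda p: p ** 2
p0 = 3.0
print(sensitivity(f, p0))         # exact value is 2*p0 = 6
print(taylor_approx(f, p0, 3.1))  # 9 + 6*0.1 = 9.6 (exact f(3.1) = 9.61)
```

For the quadratic $f(p) = p^2$ the central difference is exact up to floating-point rounding, which makes it a convenient sanity check.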
## Relativity

$ \begin{equation*}\begin{array}{|c|c|c|c|} \hline \text{Object} & \text{Proper Length} & \text{Proper Time} & \text{Proper Mass} \\ \hline \text{Observer} & L_0 & T_0 & M_0 \\ \hline \text{Moving Object} & L_1 & T_1 & M_1 \\ \hline \end{array} \end{equation*} $

The proper lengths and proper times of the observer and the moving object are related by the Lorentz transformation equations:

$ \begin{equation*} L' = \frac{L}{\gamma} \end{equation*} $

$ \begin{equation*} \Delta t' = \frac{\Delta t}{\gamma} \end{equation*} $

Here $L'$ is the contracted length of the moving object and $\Delta t'$ is the proper time elapsed on its own clock. The Lorentz transformation equations follow from the invariance of the speed of light: it takes the same value in every reference frame.

![[DALL·E 2022-07-29 21.21.09 - Einstein lost in a mathematical paradox, painting by Picasso.png]]

The speed of light is the same in all reference frames:

$ \begin{equation*} c = \frac{L}{\Delta t} = \frac{L'}{\Delta t'} \end{equation*} $

Substituting the Lorentz transformation equations:

$ \begin{equation*} c = \frac{L/\gamma}{\Delta t/\gamma} = \frac{L}{\Delta t} \end{equation*} $

The factors of $\gamma$ cancel, so both frames recover the same expression, which is just the definition of the speed of light:

$ \begin{equation*} c = \frac{L}{\Delta t} \end{equation*} $

A similar ratio of a speed, a length and a time appears in numerical analysis as the **Courant-Friedrichs-Lewy** (CFL) **condition**. For wave speed $c$, time step $\Delta t$ and grid spacing $\Delta x$, the CFL number is

$ \begin{equation*} C = \frac{c \, \Delta t}{\Delta x} \end{equation*} $

The CFL condition $C \le C_{\max}$ is a necessary condition for convergence of an explicit scheme; the CFL number is a measure of the **stability** of the scheme, not of its accuracy or efficiency. 
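The Lorentz-factor relations above are easy to check numerically; `gamma` is an illustrative helper name:

```python
import math

def gamma(v, c=1.0):
    """Lorentz factor 1 / sqrt(1 - v^2/c^2), in units where c = 1 by default."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# A rod of proper length L0 = 2.0 moving at v = 0.6c:
L0, v = 2.0, 0.6
g = gamma(v)          # gamma = 1 / sqrt(1 - 0.36) = 1.25
print(L0 / g)         # contracted length L = L0 / gamma = 1.6
```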
![[DALL·E 2022-07-29 21.22.25 - Feynman lost in a mathematical paradox, painting by Picasso.png]]

What can we do if the CFL number is too large?

* choose a smaller time step $\Delta t$
* choose a larger grid spacing $\Delta x$
* choose a scheme with a larger stability limit (e.g. an implicit scheme)

## Example: CFL number for the wave equation

The CFL condition for the wave equation is

$ \begin{equation*} C = \frac{c \, \Delta t}{\Delta x} \le 1 \end{equation*} $

Let's say we want to use a time step $\Delta t = 0.1$ and the wave speed is $c = 1$. What is the smallest grid spacing $\Delta x$ we can use? Rearranging gives $\Delta x \ge c \, \Delta t = 0.1$.

Feynman: "What I cannot create, I do not understand."

![[DALL·E 2022-07-29 21.26.20.png]]

The Courant condition means that refining the grid forces a smaller time step, which is not very efficient. Instead, we can use a different method to solve the equation, which is called the Leapfrog method.

## The Leapfrog method

![[DALL·E 2022-07-29 21.31.12 - The Leapfrog method by Escher.png]]

The Leapfrog method is an explicit method for the linear advection equation $u_t + c \, u_x = 0$. It is a second-order accurate method. It is based on the following observations: the spatial derivative is approximated by a central difference, and the time derivative is *also* approximated by a central difference, spanning two time steps. The initial conditions give the values at $t = 0$. This gives the following update formula for the Leapfrog method:

$u^{n+1}_i = u^{n-1}_i - \frac{c \, \Delta t}{\Delta x} \left( u^n_{i+1} - u^n_{i-1} \right)$

The update formula uses the value at $t = n - 1$, which is not known at the very first step. We can solve this by taking the first step with a one-level scheme, using a forward difference in time and a central difference in space (FTCS):

$u^{1}_i = u^0_i - c \, \Delta t \, \frac{u^0_{i+1} - u^0_{i-1}}{2 \Delta x}$

On its own FTCS is only first-order in time and unconditionally unstable for advection, but a single bootstrap step does no harm. The resulting *leapfrog* scheme is second-order accurate in space and time and stable for $C \le 1$; unlike the first-order Lax-Friedrichs scheme, it is non-dissipative. 
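A minimal sketch of a leapfrog solver for the linear advection equation, assuming periodic boundaries and a one-step Lax-Wendroff bootstrap for the first time level; `leapfrog_advect` is an illustrative name:

```python
import math

def leapfrog_advect(u0, c, dx, dt, steps):
    """Solve u_t + c u_x = 0 with the leapfrog scheme on a periodic grid."""
    C = c * dt / dx                     # Courant number, must satisfy |C| <= 1
    n = len(u0)
    u_prev = list(u0)
    # Leapfrog needs two time levels; take the first step with Lax-Wendroff.
    u_curr = [
        u_prev[i]
        - 0.5 * C * (u_prev[(i + 1) % n] - u_prev[i - 1])
        + 0.5 * C * C * (u_prev[(i + 1) % n] - 2 * u_prev[i] + u_prev[i - 1])
        for i in range(n)
    ]
    for _ in range(steps - 1):
        # u^{n+1}_i = u^{n-1}_i - C (u^n_{i+1} - u^n_{i-1})
        u_prev, u_curr = u_curr, [
            u_prev[i] - C * (u_curr[(i + 1) % n] - u_curr[i - 1])
            for i in range(n)
        ]
    return u_curr

# Advect a sine wave once around a periodic domain of length 1.
n, c = 200, 1.0
dx = 1.0 / n
dt = 0.5 * dx                                     # Courant number C = 0.5
u0 = [math.sin(2 * math.pi * i * dx) for i in range(n)]
u = leapfrog_advect(u0, c, dx, dt, steps=2 * n)   # t = 2n*dt = 1, one full period
err = max(abs(a - b) for a, b in zip(u, u0))
print(err)   # small: the scheme is second-order and non-dissipative
```

After one full period the wave should return to its initial profile, up to a small dispersive phase error.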
A second-order one-step alternative comes from a Taylor expansion in time; this is the Lax-Wendroff scheme:

$u^{n+1}_i = u^n_i - \frac{c \, \Delta t}{2 \Delta x} \left( u^n_{i+1} - u^n_{i-1} \right) + \frac{c^2 \Delta t^2}{2 \Delta x^2} \left( u^n_{i+1} - 2 u^n_i + u^n_{i-1} \right)$

It is second-order accurate in space and time and stable for $C \le 1$. For higher-order time integration we can instead apply a Runge-Kutta scheme to the semi-discretised equation.

![[DALL·E 2022-07-29 21.34.43 - Paradoxal symmetry by Escher.png]]

We can write the second-order Runge-Kutta (Heun) scheme in a more compact form by defining an improved estimate $\tilde{u}^n_i$:

$\begin{align} \tilde{u}^n_i &= u^n_i - c \, \Delta t \, \frac{u^n_{i+1} - u^n_{i-1}}{2 \Delta x} \\ u^{n+1}_i &= \frac{1}{2} \left( u^n_i + \tilde{u}^n_i - c \, \Delta t \, \frac{\tilde{u}^n_{i+1} - \tilde{u}^n_{i-1}}{2 \Delta x} \right) \end{align}$

We can see that the second stage applies the same spatial operator as the first, with the only difference being that it uses the improved estimate $\tilde{u}^n_i$ instead of the original estimate $u^n_i$, and then averages the result with $u^n_i$. This matches the Taylor series expansion for the linear advection equation to second order. 
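The predictor-corrector idea — take a forward-Euler trial step, then average the slopes at the original and trial points — is easiest to see on a simple ODE. A sketch on $dy/dt = -y$, with illustrative helper names:

```python
import math

def euler_step(f, t, y, dt):
    """One forward-Euler step (first-order accurate)."""
    return y + dt * f(t, y)

def heun_step(f, t, y, dt):
    """One Heun (RK2) step: forward-Euler predictor y_tilde, then an
    average of the slopes at y and y_tilde (second-order accurate)."""
    y_tilde = y + dt * f(t, y)                       # improved estimate
    return y + 0.5 * dt * (f(t, y) + f(t + dt, y_tilde))

# Test problem dy/dt = -y, y(0) = 1, exact solution exp(-t).
f = lambda t, y: -y
dt, steps = 0.1, 10
y_e = y_h = 1.0
t = 0.0
for _ in range(steps):
    y_e = euler_step(f, t, y_e, dt)
    y_h = heun_step(f, t, y_h, dt)
    t += dt
exact = math.exp(-1.0)
print(abs(y_e - exact), abs(y_h - exact))  # the Heun error is much smaller
```

Each stage of Heun's method is only first-order, but the averaging cancels the leading error term, just as in the advection scheme above.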
Averaging the predictor and the corrector cancels the leading error term, which is why the two-stage scheme is second-order accurate in time even though each stage on its own is only first-order.

![[DALL·E 2022-07-29 21.39.12 - Paradoxal symmetry in the monkey universe by Picasso.png]]

It is worth noting that the improved estimate $\tilde{u}^n_i$ is exactly the forward-Euler approximation of $u^{n+1}_i$ after the first stage, so the corrector can be read as "take a trial step, then correct it using the slope at the trial point". The leapfrog method reaches second order differently: it keeps two time levels, $u^{n-1}_i$ and $u^n_i$, and each update leaps over the middle level, alternating the roles of the old and new estimates from one step to the next. This alternation is the idea behind the leapfrog method. 
![[DALL·E 2022-07-29 21.40.39.png]]
![[DALL·E 2022-08-06 21.12.36 - Love within a mathematical paradox, painting by Escher.png]]
![[DALL·E 2022-07-27 21.56.15.png]]
![[DALL·E 2022-07-27 21.54.14 - A mathematical paradox in the space station, painting by Escher.png]]
![[DALL·E 2022-07-26 22.05.35.png]]
![[DALL·E 2022-07-26 22.04.20 - An infinite mirror reflecting brains, painting by Escher.png]]
![[DALL·E 2022-07-26 21.56.20 - An infinity mirror, painting by Escher.png]]
![[DALL·E 2022-07-26 21.53.37 - An infinite camera portal, painting by Escher.png]]
![[Pasted image 20220326203126.png]]
![[TimeToDisco(0)_0.png]]
![[TimeToDisco(0)_1.png]]
![[TimeToDisco(1)_0 (1).png]]
![[TimeToDisco(1)_0.png]]
![[TimeToDisco(1)_1.png]]
![[TimeToDisco(1)_2.png]]
![[TimeToDisco(1)_3 (1).png]]
![[TimeToDisco(1)_3.png]]
![[TimeToDisco(2)_0.png]]
![[TimeToDisco(2)_1.png]]
![[TimeToDisco(2)_2.png]]
![[TimeToDisco(3)_0.png]]
![[TimeToDisco(3)_1 (1).png]]
![[TimeToDisco(3)_1.png]]
![[TimeToDisco(3)_2.png]]
![[TimeToDisco(3)_3.png]]