This post is about how to view the world through advection-equation glasses when simulating computational fluid dynamics. I will discuss how to derive the numerical flux for a coupled set of linear (purely hyperbolic) partial differential equations by decomposing them into propagating waves, and how to implement a simple finite volume scheme.

# Finite Volume Methods

Start with an arbitrary system of coupled partial differential equations, written in conservative form:

\begin{align} \frac{\partial \mathbf{Q}}{\partial t} + \nabla \cdot \mathbf{F} = 0 \end{align}Where \(\mathbf{Q}\) is a vector of the conserved variables, and \(\mathbf{F}\) describes the motion of \(\mathbf{Q}\) in space, i.e. the flux vector. For example, take a simple system of 2 equations with 2 conserved variables \(u\) and \(v\) in 1D:

\begin{align} \frac{\partial u}{\partial t} + \frac{\partial (a v)}{\partial x} &= 0 \label{sys1}\\ \frac{\partial v}{\partial t} + \frac{\partial (a u)}{\partial x} &= 0 \label{sys2} \end{align}In this system, \(\mathbf{Q} = \begin{bmatrix}u\\v\end{bmatrix}\) and \(\mathbf{F} = \begin{bmatrix}av\\au\end{bmatrix}\).

Now if I integrated this system over some volume in the domain, I would get:

\begin{align} V_i \frac{\partial \tilde{\mathbf{Q}}_i}{\partial t} + \int_{V} \nabla \cdot \mathbf{F} \, dV = 0 \end{align}Where \(\tilde{\mathbf{Q}}_i\) is the average amount of \(\mathbf{Q}\) in that volume. Now from the divergence theorem, I can re-write the volume integral as a surface integral:

\begin{align} V_i \frac{\partial \tilde{\mathbf{Q}}_i}{\partial t} + \int_{A} \hat{n} \cdot \mathbf{F}_i dA = 0 \label{weak_form} \end{align}This surface integral describes how much of \(\tilde{\mathbf{Q}}\) is moving into or out of our volume. However, it is tricky to evaluate exactly, so we are going to play some tricks to simplify it. These tricks are part of Godunov's scheme.

# Godunov's Scheme

Let's consider just a 1D problem (the one given in equations \ref{sys1} and \ref{sys2}). By applying the chain rule, we can separate this into some flux Jacobian matrix \(\stackrel{\leftrightarrow}{J}\) and a spatial derivative of \(\mathbf{Q}\): \begin{align} \frac{\partial \mathbf{F}}{\partial x} &= \frac{\partial \mathbf{F}}{\partial \mathbf{Q}} \frac{\partial \mathbf{Q}}{\partial x} = \stackrel{\leftrightarrow}{J} \frac{\partial \mathbf{Q}}{\partial x}\\ \stackrel{\leftrightarrow}{J} &= \begin{bmatrix} \frac{\partial f_1}{\partial q_1} & \frac{\partial f_1}{\partial q_2} & \ldots & \frac{\partial f_1}{\partial q_n}\\ \frac{\partial f_2}{\partial q_1} & \ddots & \ldots & \frac{\partial f_2}{\partial q_n}\\ \vdots & \ddots & \ddots & \vdots\\ \frac{\partial f_n}{\partial q_1} & \ldots & \ldots & \frac{\partial f_n}{\partial q_n} \end{bmatrix} \end{align}For the above example system,

\begin{align} \stackrel{\leftrightarrow}{J} &= \begin{bmatrix}0 & a\\a & 0\end{bmatrix} \end{align}It would be really convenient if \(\stackrel{\leftrightarrow}{J}\) were diagonal... However, we can achieve this if we decide we're not so attached to the variables in \(\mathbf{Q}\). Start by performing an eigenvalue decomposition of the flux Jacobian, \(\stackrel{\leftrightarrow}{J} = \stackrel{\leftrightarrow}{X} \stackrel{\leftrightarrow}{\Lambda} \stackrel{\leftrightarrow}{X}^{-1}\). For the example system,

\begin{align} \stackrel{\leftrightarrow}{\Lambda} &= \begin{bmatrix} \lambda_1 & 0\\ 0 & \lambda_2 \end{bmatrix} = \begin{bmatrix} a & 0\\0 & -a \end{bmatrix}\\ \stackrel{\leftrightarrow}{X} &= \begin{bmatrix} 1 & 1\\ 1 & -1 \end{bmatrix}\\ \stackrel{\leftrightarrow}{X}^{-1} &= \begin{bmatrix} \frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{-1}{2} \end{bmatrix} \end{align}Multiplying the entire system by \(\stackrel{\leftrightarrow}{X}^{-1}\),

\begin{align} \stackrel{\leftrightarrow}{X}^{-1} \frac{\partial \mathbf{Q}}{\partial t} + \stackrel{\leftrightarrow}{X}^{-1} \stackrel{\leftrightarrow}{X} \stackrel{\leftrightarrow}{\Lambda} \stackrel{\leftrightarrow}{X}^{-1} \frac{\partial \mathbf{Q}}{\partial x} &= 0\\ \stackrel{\leftrightarrow}{X}^{-1} \frac{\partial \mathbf{Q}}{\partial t} + \stackrel{\leftrightarrow}{\Lambda} \stackrel{\leftrightarrow}{X}^{-1} \frac{\partial \mathbf{Q}}{\partial x} &= 0 \end{align}Notice that \(\stackrel{\leftrightarrow}{X}^{-1}\) is constant in time and space, so we are free to move it inside or out of the derivative. Defining \(\mathbf{W} = \stackrel{\leftrightarrow}{X}^{-1} \mathbf{Q}\),

\begin{align} \frac{\partial \mathbf{W}}{\partial t} + \stackrel{\leftrightarrow}{\Lambda} \frac{\partial \mathbf{W}}{\partial x} &= 0 \end{align}Hey look, these are really just 2 decoupled advection equations!
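The decomposition is also easy to sanity-check numerically: multiplying \(\stackrel{\leftrightarrow}{X} \stackrel{\leftrightarrow}{\Lambda} \stackrel{\leftrightarrow}{X}^{-1}\) back together should reproduce the flux Jacobian. A quick sketch in JavaScript (the `matmul` helper and variable names are my own, not from the simulation code):

```javascript
// Check that the eigendecomposition reassembles the flux Jacobian:
// X * Lambda * X^{-1} should equal [[0, a], [a, 0]].
const a = 2.0; // arbitrary wave speed for the check

// 2x2 matrix product, row-major: [[r0c0, r0c1], [r1c0, r1c1]]
function matmul(A, B) {
  return [
    [A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
    [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]],
  ];
}

const X    = [[1,   1  ], [1,   -1  ]];
const Xinv = [[0.5, 0.5], [0.5, -0.5]];
const Lam  = [[a,   0  ], [0,   -a  ]];

const J = matmul(matmul(X, Lam), Xinv);
console.log(J); // [ [ 0, 2 ], [ 2, 0 ] ]
```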

\begin{align} \frac{\partial w_1}{\partial t} + \lambda_1 \frac{\partial w_1}{\partial x} &= 0\\ \frac{\partial w_2}{\partial t} + \lambda_2 \frac{\partial w_2}{\partial x} &= 0 \end{align}I know the exact solution to this, the flux depends only on the sign of \(\lambda\). Writing this in terms of the value to the left and right of an interface,

\begin{align} \mathbf{G}^{*} = \frac{\stackrel{\leftrightarrow}{\Lambda} - \stackrel{\leftrightarrow}{|\Lambda|}}{2} \mathbf{W}_R + \frac{\stackrel{\leftrightarrow}{\Lambda} + \stackrel{\leftrightarrow}{|\Lambda|}}{2} \mathbf{W}_L \end{align}Combining with the result in equation \ref{weak_form} and exactly evaluating the surface integral,

\begin{align} \Delta x_i \frac{\partial \tilde{\mathbf{W}}_i}{\partial t} = -\left(\tilde{\mathbf{G}}_{i+1/2} - \tilde{\mathbf{G}}_{i-1/2}\right) \end{align}This is now just an ODE, and we can apply any numerical ODE integration algorithm we want. For simplicity, I'm going to use the Forward Euler method. So the value at time \(t^{(m+1)}\) is:

\begin{align} \tilde{\mathbf{W}}_i^{(m+1)} = \tilde{\mathbf{W}}_i^{(m)} - \frac{\Delta t}{\Delta x_i} \left(\tilde{\mathbf{G}}_{i+1/2}^{(m)} - \tilde{\mathbf{G}}_{i-1/2}^{(m)}\right) \end{align}If we assume \(\tilde{\mathbf{W}}_i\) is constant within each cell and take \(a > 0\) (so \(\lambda_1 > 0\) and \(\lambda_2 < 0\)),

\begin{align} \tilde{\mathbf{W}}_i^{(m+1)} = \tilde{\mathbf{W}}_i^{(m)} - \frac{a \Delta t}{\Delta x_i} \left( \begin{bmatrix}1 & 0\\0 & 1\end{bmatrix} \tilde{\mathbf{W}}_i^{(m)} - \begin{bmatrix}0 & 0\\0 & 1\end{bmatrix} \tilde{\mathbf{W}}_{i+1}^{(m)} - \begin{bmatrix}1 & 0\\0 & 0\end{bmatrix} \tilde{\mathbf{W}}_{i-1}^{(m)} \right) \end{align}It is entirely possible to implement a solver that evolves \(\mathbf{W}\) and only converts back to \(\mathbf{Q}\) at the end. However, we could also just invert now.
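Componentwise, the update above is just first-order upwind differencing applied to each characteristic variable, biased toward the side its wave comes from. A sketch (assuming \(a > 0\) and periodic indexing; the function name is my own):

```javascript
// One Forward Euler step in the characteristic variables, with nu = a*dt/dx.
// w1 travels right (lambda_1 = a > 0), w2 travels left (lambda_2 = -a < 0),
// so each component is differenced toward its own upwind side.
function stepW(w1, w2, nu) {
  const n = w1.length;
  const w1New = new Array(n);
  const w2New = new Array(n);
  for (let i = 0; i < n; i++) {
    const ip = (i + 1) % n;       // periodic right neighbor
    const im = (i - 1 + n) % n;   // periodic left neighbor
    w1New[i] = w1[i] - nu * (w1[i] - w1[im]); // upwind side is the left
    w2New[i] = w2[i] - nu * (w2[i] - w2[ip]); // upwind side is the right
  }
  return [w1New, w2New];
}

// With nu = 1 the scheme is exact: each wave shifts by one whole cell.
const [w1N, w2N] = stepW([1, 0, 0], [1, 0, 0], 1);
console.log(w1N, w2N); // [ 0, 1, 0 ] [ 0, 0, 1 ]
```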

\begin{align} \tilde{\mathbf{Q}}_i^{(m+1)} = \tilde{\mathbf{Q}}_i^{(m)} - \frac{a \Delta t}{\Delta x_i} \left( \tilde{\mathbf{Q}}_i^{(m)} - \frac{\tilde{\mathbf{Q}}_{i+1}^{(m)} + \tilde{\mathbf{Q}}_{i-1}^{(m)}}{2} + \begin{bmatrix} \frac{\tilde{v}_{i+1}^{(m)} - \tilde{v}_{i-1}^{(m)}}{2}\\ \frac{\tilde{u}_{i+1}^{(m)} - \tilde{u}_{i-1}^{(m)}}{2}\\ \end{bmatrix} \right) \end{align}

# Example Simulation

So now that we have an algorithm, what does it look like? To find out, I implemented a simple simulation in JavaScript. It uses periodic boundary conditions and the following initial conditions:

\begin{align} u(0,x) &= \begin{cases} \sin(2\pi x) & -1 \le x \le 1\\ 0 & \textrm{otherwise} \end{cases}\\ v(0,x) &= \begin{cases} \cos(2\pi x) & -1 \le x \le 1\\ 0 & \textrm{otherwise} \end{cases} \end{align}Legend: Green is u, red is v.
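A minimal sketch of how such a simulation might be set up (the function and variable names are my own, not necessarily those of the actual widget): fill the cell averages at the cell centers, then repeatedly apply the Forward Euler update in the conserved variables.

```javascript
// Cell-centered initial conditions on [xMin, xMax], zero outside [-1, 1].
function initialConditions(xMin, xMax, numCells) {
  const dx = (xMax - xMin) / numCells;
  const u = new Array(numCells);
  const v = new Array(numCells);
  for (let i = 0; i < numCells; i++) {
    const x = xMin + (i + 0.5) * dx; // cell center
    const inside = x >= -1 && x <= 1;
    u[i] = inside ? Math.sin(2 * Math.PI * x) : 0;
    v[i] = inside ? Math.cos(2 * Math.PI * x) : 0;
  }
  return [u, v, dx];
}

// One Forward Euler step of the final update above, written directly in the
// conserved variables, with periodic boundaries. nu = a*dt/dx, stable for
// nu <= 1 (the CFL condition for this scheme).
function stepQ(u, v, nu) {
  const n = u.length;
  const uNew = new Array(n);
  const vNew = new Array(n);
  for (let i = 0; i < n; i++) {
    const ip = (i + 1) % n;
    const im = (i - 1 + n) % n;
    uNew[i] = u[i] - nu * (u[i] - 0.5 * (u[ip] + u[im]) + 0.5 * (v[ip] - v[im]));
    vNew[i] = v[i] - nu * (v[i] - 0.5 * (v[ip] + v[im]) + 0.5 * (u[ip] - u[im]));
  }
  return [uNew, vNew];
}

// Advance a few steps. With periodic boundaries the scheme conserves the
// totals of u and v exactly (the flux differences telescope to zero).
let [u, v] = initialConditions(-1, 1, 100);
for (let m = 0; m < 10; m++) [u, v] = stepQ(u, v, 0.5);
```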

(Interactive controls: X min, X max, Num Cells, \(a\), Update Steps, Update Delay.)

# Conclusion

So that's it! Those who study gas dynamics will recognize that this system is very similar to linearized gas dynamics, except that I set both wave speeds equal. In upcoming posts sometime in the future (can I be any more vague?) I will use the scheme developed here to investigate optimization on modern architectures, both CPU (OpenMP) and GPU (OpenCL), as well as some of the pitfalls of writing parallel code for embarrassingly parallel problems that are heavily memory bound.
