Materials: Models and Simulation

Molecular dynamics, stochastic models, optics

Computational modelling and optimal control of interacting particle systems: connecting dynamic density functional theory and PDE-constrained optimization

Supervisor(s): Ben Goddard, John Pearson

From animal flocking, the spread of diseases, and the formation of opinions to nano-filtration, brewing, and printing – there are plenty of processes in nature, society, and industry that can be thought of as systems of interacting particles. Consider the flocking of birds: we can (mathematically) think of each bird as a particle that interacts with the other birds (particles) around it in the sky. Its actions are influenced by the wind current, the other birds’ signals, and environmental factors, such as climate conditions or predators. In an industrial setting, yeast particles are suspended in the beer during the brewing process. These yeast particles interact by clumping together, while being affected by temperature and by gravity, so that they sediment to the bottom of the brewing vessel over time.

We can build mathematical models of such interacting particle systems and use them to describe all the processes mentioned above. Such models are called partial differential equation (PDE) models, and they capture how the particles move under the influence of diffusion and of different forces, such as gravity and currents (as in the birds example). Moreover, such models can describe how particles interact with each other, for example whether they form clusters (as the yeast does in brewing). Once such a PDE model is found, we can ask further questions: How can the sedimentation of yeast be improved, so that the brewing process is sped up? How can we help birds find the best route for migration? Mathematically, these are questions about optimization or, more specifically, about optimal control. The key question in optimal control is: what problem setup (the aspect we can control) will result in a situation close to some desired outcome with a minimal investment of energy? In the case of brewing, we may add more clumping agents (the control) to the brewing vessel to cause the formation of larger, heavier clumps of yeast that sediment more quickly (the desired outcome).
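As a toy illustration of the kind of PDE model described above (and not the actual equations or solvers from the thesis), the sketch below simulates "yeast" in a one-dimensional column of beer, combining diffusion with a downward drift due to gravity, using a simple finite-volume scheme. The function name and every parameter value are illustrative assumptions.

```python
import numpy as np

def sediment(n=100, steps=2000, D=1e-3, g=0.05, dt=1e-3):
    """Drift-diffusion model of yeast density in a 1D column.
    x = 0 is the bottom of the vessel; g is a downward drift
    speed due to gravity, D a diffusion coefficient.
    All parameter values are illustrative, not from the thesis."""
    dx = 1.0 / n
    rho = np.ones(n)  # start from a uniform yeast density
    for _ in range(steps):
        # flux through each interior cell face:
        # diffusion down the density gradient, plus downward drift
        diff = -D * (rho[1:] - rho[:-1]) / dx
        drift = -g * 0.5 * (rho[1:] + rho[:-1])
        J = np.concatenate(([0.0], diff + drift, [0.0]))  # no-flux walls
        rho = rho - dt * (J[1:] - J[:-1]) / dx
    return rho

rho = sediment()
```

Because the density is updated through fluxes across cell faces, with zero flux at the walls, the total amount of yeast is conserved exactly while the density builds up at the bottom of the vessel, mimicking sedimentation.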

This thesis is concerned with such questions, i.e., the optimal control of the PDEs that describe particle dynamics. Since neither the models nor the optimization problems are solvable analytically, a key topic of this work is the development of fast and accurate computational solvers for such problems. Moreover, several extensions of the method are introduced. Since most real-world examples are not confined to a box-like (i.e., easy-to-compute) shape, one extension adapts the numerical methods so that problems can be solved on more complicated shapes, such as a brewing vessel. Moreover, depending on the application of interest, more physical effects have to be included in the PDE model. First, we extend the model to describe different kinds of particles and how they may interact; for example, we could describe how seagulls and pigeons interact with each other, as well as among themselves, in the same model. In the original model, we assume each particle is soft, so that we can squeeze many particles into each other to fit into a small box. In a second extension of the model, each particle is like a marble: fewer particles fit in the same box, since they have a hard surface and cannot overlap. The methods developed in this thesis can be applied to the modelling and optimization of various real-world processes.

Link to online thesis

Analysis and applications of dynamic density functional theory

Supervisor(s): Ben Goddard

A common picture of liquids and gases is that they are made up of molecules; the molecules are made up of atoms; and the atoms, in turn, are made up of electrons, protons, and neutrons. Most of the time, we do not need to consider this granular nature of matter, and, to all intents and purposes, everyday liquids and gases are ‘continuous’. However, there are many natural phenomena which cannot be accurately described by continuous thinking alone, such as the exact profile of wine at the liquid–glass–air interface, the shape of freezing raindrops, and the structure of liquid crystals.

Equally, there are interesting multiscale phenomena in nature, in which discrete elements interact with one another and, when one squints, appear to behave continuously: the flocking of birds, the effervescent paths of ant armies over forest floors, and the dynamics of colloidal fluids. In modelling any of these situations there is an advantage to treating the systems as ‘continuous’. If the individual particles in a system are very numerous, keeping track of all of them requires a great deal of computing power. Furthermore, observing very fast changes in the dynamics of such systems requires very fine snapshots in time, leading to slow computer simulations, given the speed limits of current processors. On the other hand, if one were to average over the behaviour of the individuals, both temporally (by considering longer snapshots) and spatially (by coarsening over the individuals’ position and velocity data), it is conceivable that we might throw away the very information which gave rise to the interesting phenomenon in the first place.
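To make the idea of spatial coarsening concrete, here is a minimal sketch, not taken from the thesis, of turning a large cloud of individual particle positions into a single continuous density profile. The function name, bin count, and the choice of a uniform particle cloud are all illustrative assumptions.

```python
import numpy as np

def coarse_grain(positions, n_bins=50):
    """Average over individuals: turn a cloud of particle positions
    on [0, 1] into a density profile that integrates to one."""
    counts, edges = np.histogram(positions, bins=n_bins, range=(0.0, 1.0))
    dx = 1.0 / n_bins
    density = counts / (len(positions) * dx)   # normalised histogram
    centres = 0.5 * (edges[:-1] + edges[1:])   # bin midpoints
    return centres, density

# 10^5 individuals, far too many to reason about one by one,
# collapse into just 50 numbers describing a continuous density
rng = np.random.default_rng(0)
particles = rng.uniform(0.0, 1.0, size=100_000)
x, rho = coarse_grain(particles)
```

The trade-off described above is visible here: the 50-value density profile is vastly cheaper to work with than 100,000 positions, but any structure finer than a bin, and all velocity information, has been averaged away.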

In this thesis we consider a continuous way of thinking about fluids that describes such multiscale phenomena without losing the interesting effects.

Link to online thesis

Error in the invariant measure of numerical discretization schemes for canonical sampling of molecular dynamics

Supervisor(s): Ben Leimkuhler

The best description we have for the way the universe behaves at the smallest level is given by quantum dynamics, which describes how packets of energy interact with their surroundings. Though much celebrated, this theory yields equations governing the dances of the atoms that are too difficult to solve exactly. However, we can use computer simulation to find approximations to the evolution of a system of particles in time, by assuming their motions are classical (like snooker balls) and by moving the system by successive small jumps forward in time. This procedure is known as molecular dynamics (MD).
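The "successive small jumps forward in time" can be illustrated with the standard velocity Verlet scheme, shown below for a single particle on a harmonic spring. This is a generic sketch of classical MD time-stepping, not the particular methods studied in the thesis; the function and parameters are illustrative.

```python
import numpy as np

def velocity_verlet(q, p, force, dt, steps):
    """Advance a classical particle (unit mass) by successive small
    jumps of size dt, using the velocity Verlet scheme:
    half-kick the momentum, drift the position, then half-kick again."""
    f = force(q)
    path = [q]
    for _ in range(steps):
        p += 0.5 * dt * f       # half-step momentum update
        q += dt * p             # full-step position update
        f = force(q)            # recompute the force
        p += 0.5 * dt * f       # second half-step momentum update
        path.append(q)
    return np.array(path), p

# harmonic 'bond': V(q) = q^2 / 2, so the force is -q
path, p = velocity_verlet(q=1.0, p=0.0, force=lambda q: -q, dt=0.01, steps=1000)
```

A key reason this scheme is popular in MD is that, even over long runs, the total energy p²/2 + V(q) stays close to its initial value rather than drifting away.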

We can use MD simulations to answer many questions about a system’s behaviour. For instance, if we wish to know the likelihood of a group of atoms being arranged in a certain way in a random sample (e.g., a knotted state of a biomolecule like DNA), we can perform a long simulation and measure how much time it spends in that particular configuration. If we assume that our simulation is long enough to be representative, then we can work out an approximate probability for the system to be found in that configuration. Additionally, in simulation, we may add a small random force to the equations governing every atom’s movement, approximating the effects of a collection of particles outwith our system of interest, while simultaneously modelling the transfer of heat from the modelled system to its environment. Such a “heat bath” maintains the system temperature, exchanging energy just as if it were immersed in a solvent (such as blood, for biological systems). The challenge then is to design appropriate time-stepping methods which mimic the physical behaviour of the atoms in the presence of the random heat bath.
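Both ideas in the paragraph above, the random heat-bath force and the time-fraction estimate of a probability, can be sketched in a few lines. The following simulates a single particle in a double-well potential under overdamped Langevin dynamics, with the noise term playing the role of the heat bath, and estimates the probability of the "left-well" configuration from the fraction of time spent there. The potential, the simple Euler–Maruyama discretization, and all parameter values are illustrative assumptions, not the schemes studied in the thesis.

```python
import numpy as np

def time_in_left_well(steps=200_000, dt=1e-3, beta=1.0, seed=0):
    """Overdamped Langevin dynamics in the double-well potential
    V(x) = (x^2 - 1)^2 at inverse temperature beta.  Returns the
    fraction of time spent in the left well (x < 0), an estimate
    of the probability of that configuration."""
    rng = np.random.default_rng(seed)
    x = 1.0                              # start in the right well
    noise = np.sqrt(2.0 * dt / beta)     # heat-bath kick strength
    left = 0
    for _ in range(steps):
        force = -4.0 * x * (x * x - 1.0)   # force is -V'(x)
        x += dt * force + noise * rng.standard_normal()
        left += x < 0.0
    return left / steps

prob_left = time_in_left_well()
```

By the symmetry of the potential, a long enough run should give a fraction near one half; a short run, which hops between the wells only a few times, will not be representative, which is exactly the caveat raised above.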

Many algorithms are available to evolve the MD simulation in time. This thesis is concerned with studying the effect that the choice of algorithm has on the errors introduced in the averages computed from the overall simulation. We draw on previous techniques for studying algorithms in a constant-energy setting (without the random heat bath) to develop a new framework for categorizing methods in the more general stochastic setting. We implement the new methods in state-of-the-art software, and compare them using an MD simulation of a biomolecule.
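The effect of the time-stepping choice on computed averages can be seen even in the simplest stochastic example: Euler–Maruyama sampling of a harmonic potential at unit temperature, where the exact long-run average of x² is 1. The code below is an illustration of this kind of error, not the framework developed in the thesis; the scheme, step sizes, and run lengths are illustrative assumptions.

```python
import numpy as np

def sampled_average_x2(dt, steps=4000, chains=2000, seed=1):
    """Euler-Maruyama for overdamped Langevin dynamics in the
    harmonic potential V(x) = x^2 / 2 at unit temperature, run
    over many independent chains.  Returns the computed long-run
    average of x^2; the exact answer is 1."""
    rng = np.random.default_rng(seed)
    x = np.zeros(chains)
    total, count = 0.0, 0
    for step in range(steps):
        # one small jump: deterministic force -x plus heat-bath noise
        x += -dt * x + np.sqrt(2.0 * dt) * rng.standard_normal(chains)
        if step >= steps // 2:          # discard the first half as burn-in
            total += float(np.sum(x * x))
            count += chains
    return total / count

coarse = sampled_average_x2(dt=0.25)   # large time step: visibly biased
fine = sampled_average_x2(dt=0.05)     # small time step: much closer to 1
```

Both runs sample the same physics, yet the computed averages differ systematically with the step size; quantifying and reducing exactly this kind of discretization bias, for far more sophisticated schemes, is what the thesis is about.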

Link to online thesis