Brian D.O. Anderson
(Canberra, Australia)
The Mathematics of Formation Control
Formations of mobile agents, including unmanned aerial vehicles, are often used to localize objects in the environment, and many control problems arise.
For example, often, such formations should take up a particular shape. What needs to be measured and what needs to be controlled to maintain a prescribed shape? Can control be distributed, i.e. can one arrange for each agent just to observe its neighbors and act appropriately, and yet have the whole formation behave correctly? How can a desired shape be established? What is the effect of noise distorting the measurements? How can an entire formation shape be preserved while the formation translates from A to B?
The resolution of these questions draws on ideas from diverse mathematical subfields, including graph theory, dynamical systems, Riemannian manifolds and Morse theory. The lecture will illustrate a number of these problems, and in particular the mathematical tools involved in resolving them.
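As a toy illustration of the distance-based viewpoint (a standard gradient law, not necessarily one of the specific schemes of the lecture), consider agents in the plane that each measure only the relative positions of their graph neighbors and descend a squared-distance-error potential; the illustrative setup below drives four agents to a rigid square formation.

```python
import numpy as np

# Minimal sketch of distance-based formation control: each agent i applies
#   u_i = -sum_{j in N_i} (||x_i - x_j||^2 - d_ij^2) (x_i - x_j),
# i.e. gradient descent on the squared-distance-error potential, using only
# the relative positions of its neighbors.  (Standard textbook law; the
# target graph and distances below are an illustrative choice.)

# Rigid target graph on 4 agents: unit square plus one diagonal.
d = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0, (0, 2): np.sqrt(2)}

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 2))               # random initial positions

dt = 0.01
for _ in range(20000):                    # forward-Euler integration
    u = np.zeros_like(x)
    for (i, j), dij in d.items():
        rel = x[i] - x[j]                 # only relative information is used
        err = rel @ rel - dij**2          # squared-distance error on edge
        u[i] -= err * rel
        u[j] += err * rel
    x += dt * u

for (i, j), dij in d.items():
    print(f"edge ({i},{j}): {np.linalg.norm(x[i] - x[j]):.4f}  (target {dij:.4f})")
```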
Ragnar-Olaf Buchweitz
(Toronto, Canada)
The McKay Correspondence then and now
In his treatise on “Symmetry”, Hermann Weyl credits Leonardo Da Vinci with the insight that the only finite symmetry groups in the plane are cyclic or dihedral.
Reaching back even farther, had the abstract notion been available, Euclid's Elements might well have ended with the theorem that only three further groups can occur as finite groups of rotational symmetries in 3-space, namely those of the Platonic solids. Of course, it took another 22 centuries for such a formulation to be possible, put forward by C. Jordan (1877) and F. Klein (1884).
Klein's investigation of the orbit spaces of those groups and of their double covers, the binary polyhedral groups, in particular, lies at the origin of singularity theory, and in the century that followed many surprising connections with other areas of mathematics, such as the theory of simple Lie groups, were revealed in work by Grothendieck, Brieskorn, and Slodowy in the 1960s and 70s. A beautiful and comprehensive survey of that side of the story was given by G.-M. Greuel in the extended published version of his talk at the Centennial Meeting of the DMV in 1990 in Bremen.
It then came as a complete surprise when J. McKay pointed out in 1979 a very direct, though at the time mysterious, relationship between the geometry of the resolution of singularities of these orbit spaces and the representation theory of the finite groups one starts from. In particular, he found a remarkably simple explanation for the occurrence of the Coxeter-Dynkin diagrams in the theory.
This marks essentially the beginning of “Noncommutative Singularity Theory”, the use of representation theory of not necessarily commutative algebras to understand the geometry of singularities, a subject area that has exploded during the last decade, in particular because of its role in the mathematical formulation of string theory in physics.
In this talk I will survey the beautiful classical mathematics at the origin of this story and then give a sampling of recent results and of work still to be done.
Omar Ghattas
(Austin, USA)
Uncertainty quantification with applications to large-scale models: from data to inference to optimization
In recent years there has been increasing recognition of the critical role of uncertainty quantification (UQ) in all phases of computational science & engineering (CSE). The need to account for uncertainties is becoming urgent as the fidelity of CSE models grows and they become accepted surrogates for reality, used increasingly for decision making about critical technological and societal systems.
The need to quantify uncertainties arises in the three fundamental tasks of CSE: (1) The inverse problem: given a model, (possibly noisy) observational data, and any prior knowledge of model parameters, infer unknown parameters and their uncertainties by solving a statistical inverse problem. (2) The prediction (or forward) problem: once model parameters and uncertainties have been estimated from data, propagate the resulting probability distributions through the model to yield predictions of quantities of interest with quantified uncertainties. (3) The optimization problem: given an objective function representing stochastic predictions of quantities of interest and decision variables (design or control) that can be manipulated to influence the objective, solve the optimal control or design problem governed by the stochastic forward problem to produce optimal values of these variables.
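For concreteness, task (1) is typically posed in a Bayesian framework. In generic notation (with $m$ the parameters, $d$ the data, $F$ the parameter-to-observable map, and additive Gaussian noise; not necessarily the notation of the talk):

```latex
% Bayesian statistical inverse problem, task (1), in generic notation:
\pi_{\mathrm{post}}(m \mid d)
  \;\propto\; \pi_{\mathrm{like}}(d \mid m)\,\pi_{\mathrm{prior}}(m),
\qquad
\pi_{\mathrm{like}}(d \mid m)
  \;\propto\; \exp\!\Bigl(-\tfrac{1}{2}\,\bigl\|F(m)-d\bigr\|_{\Gamma_{\mathrm{noise}}^{-1}}^{2}\Bigr).
```

Task (2) then propagates the posterior through the prediction map, and task (3) optimizes an objective defined over the resulting stochastic predictions.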
Fundamentally, what ties these three problem classes together is the need to explore parameter space, where each point in parameter space entails a large-scale forward model solve. Unfortunately, the use of conventional Monte Carlo methods to explore parameter space is prohibitive, especially when large complex models (e.g., PDEs) and high-dimensional stochastic parameter spaces (e.g., representing discretized fields) are involved.
We believe the key to overcoming the severe challenges entailed by UQ for complex models is to recognize that beneath the apparently high-dimensional inversion, prediction, and optimization problems lurk much lower dimensional manifolds that capture the maps from uncertain parameters to outputs of interest (observables, prediction quantities, optimization functionals). Thus, these problems are characterized by their much smaller intrinsic dimensions. Here we introduce a framework for exploiting this problem structure that is based on local Taylor approximations of the parameter-to-output maps, in combination with randomized linear algebra techniques. In examples involving geophysical models (Antarctic ice sheet flow, subsurface flow), we demonstrate that the cost of solving the forward, inverse, and optimization problems—when measured in PDE model solves—is independent of the problem dimension.
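A minimal sketch of the randomized linear algebra ingredient: if a (data-misfit) Hessian is accessible only through matrix-vector products, each of which would cost one forward plus one adjoint PDE solve, a randomized range finder captures its dominant spectrum with a number of products set by the intrinsic rank rather than the parameter dimension. The operator below is synthetic, and this is a generic Halko-Martinsson-Tropp-style sketch, not the authors' implementation.

```python
import numpy as np

# Randomized low-rank approximation via matvecs only.  'hess_matvec' stands
# in for the action of a Hessian whose each product would cost a forward +
# adjoint PDE solve; here it is a synthetic operator with fast spectral decay.

n = 2000                                  # nominal (discretized) parameter dimension
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(n, 50)))
lam = 10.0 ** -np.arange(50)              # rapid decay: low intrinsic rank

def hess_matvec(V):                       # action of H = U diag(lam) U^T
    return U @ (lam[:, None] * (U.T @ V))

k, p = 20, 10                             # target rank and oversampling
Omega = rng.normal(size=(n, k + p))       # k+p matvecs, independent of n
Q, _ = np.linalg.qr(hess_matvec(Omega))   # orthonormal basis for the range
T = Q.T @ hess_matvec(Q)                  # small (k+p) x (k+p) projection
evals = np.linalg.eigh(T)[0]
print("top eigenvalues:", np.sort(evals)[::-1][:5])   # ~ 1, 0.1, 0.01, ...
```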
This work is joint with Alen Alexanderian (NCSU), Tobin Isaac (Chicago), Noemi Petra (UC Merced), and Georg Stadler (NYU).
Christian J. Kähler
(Munich)
Large-scale structures in turbulent boundary layers
Turbulent boundary layers are still the subject of intense scientific investigation, and with the increasing power of numerical and experimental techniques, more features of the turbulent motion can be resolved, sharpening our picture of near-wall turbulence. Thanks to the enormous progress in optical field measurement techniques (PIV/PTV), it is possible today to sample the flow non-intrusively, even at high Reynolds numbers, with micron resolution in all three spatial directions. This allows for investigations of the scaling of statistical flow quantities and of near-wall flow effects, such as the small reverse-flow events in the near-wall region recently predicted by numerical flow simulations. Moreover, it is possible to record turbulent boundary layer features over a large domain in order to study coherent large-scale flow features, such as turbulent superstructures and their scaling. In this presentation, the large- and small-scale features of various high Reynolds number turbulent boundary layer flows will be discussed, and the link or interaction between the coherent structures will be illuminated.
Britta Nestler
(Karlsruhe)
Pattern formation studies by large-scale phase-field simulations
Phase-field modelling has become a fairly versatile technique for the treatment of microstructure formation and phase transition problems. It usually operates on a mesoscopic length scale, exploring microstructural characteristics at micrometer resolution. With this scope, the method serves as the bonding chain between atomistic and macroscopic simulation schemes. In this way, the phase-field method plays a central role in multiscale materials modelling and naturally calls for large representative volume elements. Furthermore, the combination of phase-field modelling with multiphysics applications such as heat and mass transfer, continuum mechanics, fluid flow, micromagnetism and electrochemistry has been achieved. Incorporating multiscale and multiphysics, phase-field modelling is central for the future technology “Integrated Computational Materials Engineering” (ICME), as it provides a medium of information transfer both between experimentalists and modellers and between different materials modelling methods.

In this overview talk, we present a novel formulation of a general phase-field model for multicomponent material systems based on a grand potential formalism, and we discuss techniques to efficiently transfer thermodynamic databases so as to provide direct access to the Gibbs free energies of the different phases. Alloy systems with three or more chemical species can form a broad variety of different microstructures depending on physical parameters and processing conditions. To investigate the diversity of pattern formation, full three-dimensional modelling is mandatory and requires intense computational power. We choose a ternary eutectic alloy to demonstrate the power of high-performance computations for describing the physical mechanisms of experimentally observed phase ordering during solidification and to derive morphology transition diagrams. Using advanced data analysis tools and principal component algorithms, we illustrate the necessity of massively parallel high-performance computing techniques to resolve the microstructures in sufficiently large representative volume elements.

As another example of large-scale 3D computations, we apply the phase-field method to study wetting phenomena of immiscible and compound droplets on flat, porous and chemically structured substrates. By combining the phase-field method with elasto-plastic models, we give a detailed view of the stress-strain evolution and crack propagation in polycrystalline grain structures.
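To fix ideas, a minimal one-dimensional phase-field sketch (an Allen-Cahn relaxation with a double-well potential, vastly simpler than the grand-potential multicomponent models of the talk) already shows the diffuse interface that the method evolves:

```python
import numpy as np

# Minimal 1D phase-field sketch (Allen-Cahn relaxation): a phase indicator
# phi in [0, 1] evolves under  dphi/dt = eps^2 phi_xx - W'(phi)  with the
# double-well potential W(phi) = phi^2 (1 - phi)^2.  A toy illustration of
# the diffuse-interface idea, not a model from the talk.

N = 400
dx = 1.0 / N
x = np.linspace(0.0, 1.0, N)
eps = 0.02                                    # controls the interface width
phi = np.where(x < 0.5, 0.0, 1.0)             # sharp initial interface

dt = 0.2 * dx**2 / eps**2                     # explicit-Euler stability margin
for _ in range(5000):
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2  # periodic BC
    dW = 2 * phi * (1 - phi) * (1 - 2 * phi)  # W'(phi)
    phi += dt * (eps**2 * lap - dW)

# the sharp step relaxes to smooth tanh-like profiles of width ~ a few eps
print("cells inside the diffuse interfaces (0.05 < phi < 0.95):",
      int(np.sum((phi > 0.05) & (phi < 0.95))))
```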
Ilaria Perugia
(Vienna)
Non-standard finite elements for wave problems
In recent years, finite element methods based on operator-adapted approximating spaces have been developed in order to better reproduce physical properties of the analytical solutions and to enhance stability and approximation properties. They are based on incorporating a priori knowledge about the problem into the local approximating spaces, by using trial and/or test spaces locally spanned by functions belonging to the kernel of the differential operator (Trefftz spaces). These methods are particularly popular for wave problems in the frequency domain. Here, the use of oscillating basis functions improves accuracy relative to computational cost, compared with standard polynomial finite element methods, and relaxes the strong requirements on the number of degrees of freedom per wavelength needed to ensure stability.
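To make the Trefftz property concrete: for the Helmholtz operator $\Delta + k^2$, plane waves $\exp(\mathrm{i} k\, d \cdot x)$ with $|d| = 1$ lie in the kernel, so local spaces spanned by such waves are Trefftz spaces. A quick symbolic check (a generic illustration, not a particular method from the talk):

```python
import sympy as sp

# Trefftz property of plane-wave basis functions for the Helmholtz operator:
# u(x, y) = exp(i*k*(d1*x + d2*y)) with d1^2 + d2^2 = 1 satisfies
# (Laplacian + k^2) u = 0, so local spaces spanned by such waves lie in the
# kernel of the differential operator.

x, y, k, th = sp.symbols('x y k theta', real=True)
d1, d2 = sp.cos(th), sp.sin(th)              # unit propagation direction
u = sp.exp(sp.I * k * (d1 * x + d2 * y))     # plane-wave basis function

residual = sp.diff(u, x, 2) + sp.diff(u, y, 2) + k**2 * u
print(sp.simplify(residual))                 # -> 0
```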
In this talk, the basic principles of Trefftz finite element methods for time-harmonic wave problems will be presented. Trefftz methods differ from one another in the way interelement continuity conditions are imposed. We will focus on discontinuous Galerkin approaches, where the approximating spaces are made of completely discontinuous Trefftz spaces, and on the recent virtual element framework, which allows for the construction of Trefftz-enriched continuous spaces on general polytopic meshes. The application of the Trefftz paradigm to space-time approximations of wave problems in the time domain will also be discussed.
Ben Schweizer
(Dortmund)
Resonance phenomena of small objects and the construction of meta-materials with astonishing properties
We know resonance effects from daily life: in a classical instrument, vibrations of some part of the instrument are amplified by resonance in the sound body. Typically, the resonator has a size that is related to the frequency: the larger the instrument, the lower the tone. In this talk we discuss resonators for light and sound waves that are small in size, much smaller than the wavelength. The assembly of many small resonators can act as a meta-material with astonishing properties: as a sound absorber, or as a material with negative index. Our first example is the small Helmholtz resonator; we investigate its resonance frequency and the behavior of the corresponding meta-material. The second example is the split-ring resonator for Maxwell's equations and the negative refraction of light. We conclude with some comments on negative-index cloaks: these resonators lead to the invisibility of small objects in their vicinity.
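For orientation, the classical lumped-element estimate $f_0 = \frac{c}{2\pi}\sqrt{A/(VL)}$ for a Helmholtz resonator (neck cross-section $A$, effective neck length $L$, cavity volume $V$) already exhibits the sub-wavelength effect: the resonance wavelength can far exceed the resonator size. A quick numerical check with textbook values (the asymptotic analysis of the talk is of course more refined):

```python
import numpy as np

# Classical lumped-element estimate of a Helmholtz resonator frequency,
# f0 = (c / (2*pi)) * sqrt(A / (V * L)): the resonance wavelength far
# exceeds the cavity size, the sub-wavelength effect behind meta-materials.
# (Textbook formula with illustrative values.)

c = 343.0                      # speed of sound in air [m/s]
V = 1e-4                       # cavity volume [m^3]  (~ a 4.6 cm cube)
A = 1e-5                       # neck cross-section [m^2]
L = 2e-2                       # effective neck length [m]

f0 = c / (2 * np.pi) * np.sqrt(A / (V * L))
lam = c / f0
print(f"f0 ~ {f0:.0f} Hz, wavelength ~ {lam:.2f} m vs. cavity size ~ 0.05 m")
```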
Christoph Woernle
(Rostock)
HiL simulation for testing human joint endoprostheses
Instabilities of artificial joints are prevalent complications in total joint arthroplasty. In order to investigate failure mechanisms such as dislocation of total hip replacements or instability of total knee replacements, a novel test approach is introduced by means of a hardware-in-the-loop (HiL) simulation, combining the advantages of an experimental approach with those of a numerical one. The HiL simulation is based on a six-axis industrial robot and a musculoskeletal multibody model. Within the multibody model, the anatomical environment of the corresponding joint is represented such that the soft-tissue response is taken into account during an instability event. Hence, the robot loads and moves the real implant components according to the data provided by the multibody model, while transferring back the recorded relative displacement of the implant components and the resisting moments. HiL simulations provide a new biomechanical testing tool which enables comparable and reproducible investigations of various joint replacement systems with respect to their instability behaviour under realistic movements and reproducible physiological load conditions.
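Schematically, the coupling described above is a closed loop between the numerical model and the physical implant. The sketch below only mirrors that structure; every component is a hypothetical stand-in (a linear spring replaces robot plus implant, a one-degree-of-freedom damped model replaces the musculoskeletal system), not the API or the model of the actual test stand.

```python
# Conceptual HiL loop: model -> robot -> sensors -> model, all stubbed.

def multibody_step(theta, omega, moment_fb, dt):
    """Stub musculoskeletal model: muscle drive vs. fed-back implant moment."""
    inertia, damping, drive = 0.1, 0.5, 5.0   # hypothetical 1-DoF parameters
    alpha = (drive - damping * omega - moment_fb) / inertia
    return theta + dt * omega, omega + dt * alpha

def robot_apply_and_measure(theta):
    """Stub for robot + sensors: impose theta on the implant components,
    return their relative displacement and resisting moment (linear spring)."""
    stiffness = 20.0                           # hypothetical [Nm/rad]
    return theta, stiffness * theta

theta = omega = moment = 0.0
for _ in range(10000):                         # 1 ms loop rate, 10 s of motion
    # model -> robot: commanded joint motion under the current feedback
    theta, omega = multibody_step(theta, omega, moment, 1e-3)
    # robot -> model: measured displacement and resisting moment close the
    # loop, so the soft-tissue model reacts to the real implant mechanics
    _, moment = robot_apply_and_measure(theta)
print(f"steady state: theta = {theta:.3f} rad, moment = {moment:.2f} Nm")
```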
The HiL test system for total hip replacements was validated against in vivo data derived from patients with instrumented implants, showing that the system is able to reproduce joint dynamics comparable to those present in vivo. The impact of certain test conditions, such as joint lubrication, implant position, load level in terms of body mass, and removal of muscle structures, was evaluated within several HiL simulations.
HiL testing of total knee replacements is more complex, as the kinematic and dynamic behavior of the knee joint depends to a great extent on the elastic properties of the ligament system, which must be described in the simulation model. For this purpose, a method to identify the elastic behavior of individual ligaments from their contribution to the overall spatial stiffness of the knee joint is under development. A robot under hybrid position/force control moves and loads a human cadaveric knee in different stages of ligament resection.
Emmy Noether Lecture:
Gerlind Plonka-Hoch
(Göttingen)
Prony’s Method: Parameter identification and sparse approximation
In signal analysis, we often have some a priori knowledge about the underlying structure of the sought signal, which we need to exploit suitably. Using this structure, we are faced with the problem of determining a certain number of parameters from the given signal measurements. Consider, for example, a structured function of the form
$$f(\omega) = \sum_{j=1}^{M} c_j \exp(\omega T_j)$$
with (unknown) complex parameters $c_j$ and $T_j$, $j = 1, \dots, M$. Assuming that $-\pi < \operatorname{Im} T_j < \pi$, we aim to reconstruct $c_j$ and $T_j$ from a given small number of (possibly noisy) measurement values $f(\ell)$.
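For readers unfamiliar with the method, here is a minimal noise-free sketch of classical Prony recovery from the integer samples $f(0), \dots, f(2M-1)$ (the stabilized variants mentioned below are preferred in practice; the parameter values are an arbitrary illustration):

```python
import numpy as np

# Classical Prony's method for f(w) = sum_j c_j exp(T_j * w), sampled at
# w = 0, 1, ..., 2M-1.  With z_j = exp(T_j), f satisfies a linear recurrence
# whose characteristic ("Prony") polynomial has roots z_j.

M = 3
c_true = np.array([2.0, -1.0, 0.5 + 0.3j])
T_true = np.array([-0.1 + 2.0j, -0.02 - 1.0j, 0.5j])   # Im T_j in (-pi, pi)

ell = np.arange(2 * M)
f = (c_true[None, :] * np.exp(np.outer(ell, T_true))).sum(axis=1)

# 1) Prony polynomial: sum_{k=0}^{M-1} p_k f(l+k) = -f(l+M), l = 0..M-1
H = np.array([[f[l + k] for k in range(M)] for l in range(M)])   # Hankel
p = np.linalg.solve(H, -f[M:2 * M])
# 2) its roots are z_j = exp(T_j)
z = np.roots(np.concatenate(([1.0], p[::-1])))
T_rec = np.log(z)               # principal branch suffices: |Im T_j| < pi
# 3) coefficients c_j from the Vandermonde least-squares system
V = z[None, :] ** ell[:, None]
c_rec = np.linalg.lstsq(V, f, rcond=None)[0]

print("recovered T (up to ordering):", np.sort_complex(T_rec))
print("recovered c (ordered as the roots):", c_rec)
```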
In recent years, the Prony method has been successfully applied to various inverse problems, e.g., the approximation of Green's functions in quantum chemistry and fluid dynamics, the localization of particles in inverse scattering, the parameter estimation of dispersion curves of guided waves, and the analysis of ultrasonic signals. The renaissance of Prony's method originates from modifications of the classical algorithm that considerably stabilize the original approach, such as the ESPRIT method, the matrix pencil method and the approximate Prony method.
In this talk, we give a review of Prony's method and its relations to system identification, to the annihilating filter method in signal analysis, to sparse interpolation in computer algebra, to rational approximation, and to the recovery of signals with finite rate of innovation. We will introduce a new general approach for the reconstruction of sparse expansions of eigenfunctions of suitable linear operators. This new insight provides us with a tool to unify all Prony-like methods on the one hand and to essentially generalize the Prony approach on the other. In particular, we will show that all Prony-like reconstruction methods for exponentials and polynomials known so far can be seen as special cases of this approach. Moreover, the new insight into Prony-like methods enables us to derive new reconstruction algorithms for expansions of orthogonal polynomials. We can also derive a deterministic reconstruction method for $M$-sparse vectors that can be used to construct fast sparse Fourier transform algorithms. The results presented here have been obtained in collaboration with Thomas Peter, Manfred Tasche, Marius Wischerhoff and Katrin Wannenwetsch.