Plenary Lectures 2017

Plenary Lectures – Mathematics

1. Ulrich Rüde
(Erlangen-Nürnberg, Germany)

All modern computer architectures are parallel, not only supercomputers, but even workstations and smartphones. For technological and physical reasons, the clock rate of processors cannot be increased further, so only the number of transistors on each chip keeps increasing. In consequence, parallel algorithms are required for all state-of-the-art computing. This is particularly true for demanding coupled systems in science and engineering. We will present two examples. The Earth's mantle is a spherical shell with a volume of a trillion (10^12) cubic kilometers. Mantle convection can be modeled by an indefinite finite element problem. We will demonstrate that matrix-free parallel multigrid methods can solve such systems with currently up to 10^13 (ten trillion) degrees of freedom on almost half a million processor cores in compute times of a few minutes. A second example will be motivated by 3D printing as a modern additive manufacturing technology. We will show how the process can be simulated by a complex combination of rigid granular dynamics and lattice Boltzmann methods. The scenario involves the generation of a powder bed, controlled energy transfer with an electron beam, melting of the particles, flow of the molten metal, and solidification. With supercomputers, a direct numerical simulation is possible, i.e., representing the physics with full geometric and temporal resolution of each particle and modeling the melt flow subject to surface tension and contact angle conditions.
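To make the notion of "matrix-free" concrete, here is a minimal sketch of a geometric multigrid V-cycle for a 1D Poisson model problem, in which the operator is only ever applied through its stencil and never assembled as a matrix. This is an illustration of the general technique, not the mantle-convection solver of the lecture; all names, grid sizes and parameters are chosen for the example.

```python
import numpy as np

def apply_A(u, h):
    """Matrix-free application of the 1D Laplacian (-u'') with zero Dirichlet BCs."""
    Au = 2.0 * u
    Au[1:] -= u[:-1]
    Au[:-1] -= u[1:]
    return Au / h**2

def jacobi(u, f, h, sweeps=3, omega=2.0 / 3.0):
    """Weighted Jacobi smoothing; the diagonal of A is 2/h^2."""
    for _ in range(sweeps):
        u = u + omega * (h**2 / 2.0) * (f - apply_A(u, h))
    return u

def restrict(r):
    """Full-weighting restriction from a fine grid with 2m+1 points to m points."""
    return 0.25 * (r[0:-2:2] + 2.0 * r[1::2] + r[2::2])

def prolong(e_c, n_fine):
    """Linear interpolation of a coarse-grid correction back to the fine grid."""
    e = np.zeros(n_fine)
    e[1::2] = e_c
    e[0:-2:2] += 0.5 * e_c
    e[2::2] += 0.5 * e_c
    return e

def v_cycle(u, f, h):
    n = u.size
    if n <= 3:                          # coarsest grid: a few extra sweeps suffice here
        return jacobi(u, f, h, sweeps=50)
    u = jacobi(u, f, h)                 # pre-smoothing
    r = f - apply_A(u, h)               # residual, computed matrix-free
    e_c = v_cycle(np.zeros((n - 1) // 2), restrict(r), 2.0 * h)
    u = u + prolong(e_c, n)             # coarse-grid correction
    return jacobi(u, f, h)              # post-smoothing

# usage: solve -u'' = pi^2 sin(pi x) on (0,1); the exact solution is sin(pi x)
n = 2**7 - 1
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```

Avoiding any assembled matrix keeps the memory footprint per unknown small, which is the property that makes runs with trillions of unknowns feasible at all.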

2. László Székelyhidi 
(Leipzig, Germany)

It has been known since the pioneering work of Scheffer and Shnirelman in the 1990s that weak solutions of the incompressible Euler equations behave very differently from classical solutions, in a way that is difficult to interpret from a physical point of view. Nevertheless, weak solutions in three space dimensions have been studied in connection with a long-standing conjecture of Lars Onsager from 1949 concerning anomalous dissipation and, more generally, because of their possible relevance to Kolmogorov’s K41 theory of turbulence.
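For orientation, the objects in question are the incompressible Euler equations together with Onsager's 1949 criterion on the Hölder exponent 1/3:

```latex
% Incompressible Euler equations for the velocity v and pressure p:
\[ \partial_t v + \operatorname{div}(v \otimes v) + \nabla p = 0, \qquad \operatorname{div} v = 0. \]
% Onsager (1949): every weak solution with v \in C^\theta, \theta > 1/3, conserves the
% kinetic energy \tfrac12 \int |v|^2 \, dx; for \theta < 1/3, weak solutions exhibiting
% anomalous dissipation may exist.
```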
In joint work with Camillo De Lellis we established a connection between the theory of weak solutions of the Euler equations and the Nash-Kuiper theorem on rough isometric immersions. Through this connection we can interpret the wild behaviour of weak solutions of the Euler equations as an instance of Gromov’s celebrated h-principle.
In this lecture I will explain this connection and outline the most recent progress concerning Onsager’s conjecture.

3. Oliver Ernst 
(Chemnitz, Germany)

The rapidly growing scientific discipline of Uncertainty Quantification (UQ) addresses the numerous sources of uncertainty in complex simulations of scientific and engineering phenomena in order to assess the validity, reliability and robustness of the results of such simulations. In this regard, it represents a key enabling technology for what is now called Predictive Computational Science.

In this talk we focus on two key UQ components. The first, uncertainty propagation, is concerned with solving a random differential equation, by which is meant a differential equation containing uncertain data modeled by a probability law. We highlight recent developments in collocation methods which address the situation where the uncertain data is parameterized by a countably infinite number of random parameters. In the second part of the talk we present numerical methods for performing Bayesian inference on such random data as a systematic way of merging observational data with a given probabilistic model. The challenge here is to efficiently sample from the posterior distribution in an infinite-dimensional state space with cost that is robust with respect to state space resolution and the variance of the observational noise.
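Schematically, a typical model problem behind both parts is a PDE coefficient parameterized by countably many random variables, with collocation interpolating the parameter-to-solution map and Bayes' formula updating the prior. The symbols below are generic and chosen for illustration, not taken from the talk:

```latex
% Affine-parametric coefficient with countably many parameters y = (y_1, y_2, ...):
\[ a(x, y) = a_0(x) + \sum_{j \ge 1} y_j \, \psi_j(x), \qquad y_j \sim U[-1, 1] \ \text{i.i.d.} \]
% Collocation: interpolate the parameter-to-solution map at nodes y^{(1)}, ..., y^{(N)}:
\[ u_N(y) = \sum_{k=1}^{N} u\bigl(y^{(k)}\bigr) \, L_k(y). \]
% Bayesian update of the prior \mu_0 from noisy observations \delta = G(y) + \eta, \eta \sim N(0, \sigma^2 I):
\[ \frac{d\mu^{\delta}}{d\mu_0}(y) \propto \exp\Bigl( -\tfrac{1}{2\sigma^2} \bigl| \delta - G(y) \bigr|^2 \Bigr). \]
```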

4. Jean-Michel Coron
(Paris, France)

A control system is a dynamical system on which one can act by means of controls. For these systems a fundamental problem is the stabilization issue: is it possible to stabilize a given unstable equilibrium by using suitable feedback laws? (Think of the classical experiment of balancing an upturned broomstick on the tip of one’s finger.) On this problem, we present some pioneering devices and works (Ctesibius, Watt, Maxwell, Lyapunov…), some more recent results, and an application to the regulation of the rivers La Sambre and La Meuse in Belgium. Emphasis is placed on time-varying feedback laws, 1-D hyperbolic systems, as well as the positive or negative effects of nonlinearities.
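Written out in its standard finite-dimensional form (for orientation only), the stabilization problem reads:

```latex
% Control system with state x, control u and equilibrium (x_e, u_e):
\[ \dot{x} = f(x, u), \qquad f(x_e, u_e) = 0. \]
% Stabilization: find a feedback u = k(x) (possibly time-varying, u = k(x, t)) such that
% x_e is asymptotically stable for the closed-loop system
\[ \dot{x} = f\bigl(x, k(x, t)\bigr). \]
% Time-varying feedback matters because some controllable nonlinear systems admit no
% continuous time-invariant stabilizing feedback (Brockett's obstruction).
```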

Plenary Lectures – Mechanics

1. Stefan Diebels 
(Saarbrücken, Germany)

The phenomenological approach of continuum mechanics is based on axioms and on experiments. While the balance equations are accepted as physical laws, experiments form the basis of constitutive modelling; e.g., the quality of a stress-strain relation at finite deformations strongly depends on the underlying data. On the one hand, experiments leading to homogeneous stress and strain states are typically preferred due to their simple evaluation. On the other hand, multiaxial stress and deformation states are required to match the three-dimensional state in a real component. Even though it is still an unsolved task to measure inhomogeneous stress distributions in a specimen, optical strain measurement has become a very powerful tool to reconstruct the inhomogeneous displacement field, at least on the surface of a specimen. Therefore, the evaluation of inhomogeneous experiments, e.g. true biaxial tests, offers new possibilities in the validation of constitutive equations. In most cases, due to the inhomogeneity and the non-linearity of the resulting boundary value problem, the identification of the material parameters becomes an inverse problem. The present contribution addresses the application of modern experimental techniques to the design of experiments, their realisation and, finally, the parameter identification. Special focus is given to inhomogeneous experiments and to the miniaturisation of experiments.
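As a minimal illustration of this inverse-problem character (the constitutive law, data and parameter values below are purely illustrative and not those of the lecture), one can identify the two parameters of an incompressible Mooney-Rivlin law from uniaxial nominal stress-stretch data by nonlinear least squares:

```python
import numpy as np
from scipy.optimize import least_squares

def nominal_stress(stretch, c1, c2):
    """Uniaxial nominal (1st Piola-Kirchhoff) stress of an incompressible
    Mooney-Rivlin material: P = 2 (C1 + C2/lambda) (lambda - 1/lambda^2)."""
    return 2.0 * (c1 + c2 / stretch) * (stretch - stretch**-2)

# synthetic "experiment": true parameters plus measurement noise (illustrative only)
rng = np.random.default_rng(0)
stretch = np.linspace(1.05, 2.0, 20)
P_meas = nominal_stress(stretch, 0.3, 0.05) + 0.005 * rng.standard_normal(stretch.size)

def residual(theta):
    """Mismatch between model response and measured data for parameters theta = (C1, C2)."""
    return nominal_stress(stretch, *theta) - P_meas

fit = least_squares(residual, x0=[0.1, 0.1])
print("identified parameters C1, C2:", fit.x)
```

In a real inhomogeneous experiment the forward model inside the residual is a full boundary value problem and the data are optical full-field displacements, but the structure of the identification problem is the same.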

2. Christian Bucher 
(Wien, Austria)

Since its beginnings, structural mechanics has developed strongly towards more accurate modelling and analysis. Numerical methods, especially those based on the concept of finite elements, play a key role in this development. This high degree of precision, however, is undermined by imprecise and/or incomplete knowledge about the parameters describing the models and the environmental actions on the structures, such as wind or earthquakes. Even with a substantially increased effort for experimental evidence, it is frequently not possible to arrive at deterministic descriptions of these parameters. Therefore, uncertainty-based (e.g. probabilistic) analysis becomes mandatory. This lecture will highlight selected points on how stochastic structural mechanics can contribute to solving open problems related to the development of structural design procedures.
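As a minimal sketch of the probabilistic viewpoint (the limit state, distributions and numbers below are illustrative only, not from the lecture), a failure probability can be estimated by plain Monte Carlo sampling of a resistance-load limit state:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10**6

# illustrative random structural parameters: resistance R and load effect S
R = rng.lognormal(mean=np.log(5.0), sigma=0.1, size=n)
S = rng.gumbel(loc=3.0, scale=0.3, size=n)

g = R - S                              # limit-state function: failure if g <= 0
p_f = np.mean(g <= 0.0)                # Monte Carlo estimate of the failure probability
se = np.sqrt(p_f * (1.0 - p_f) / n)    # standard error of the estimate
print(f"P_f ≈ {p_f:.2e} ± {se:.1e}")
```

For the very small failure probabilities of real structures, plain Monte Carlo becomes too expensive, which is one reason more advanced stochastic structural mechanics methods are needed.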

3. Pierre Suquet 
(LMA, CNRS, Marseille, France)

A common practice in structural problems involving heterogeneous materials with well-separated scales is to use homogenized, or effective, constitutive relations. In linear elasticity the structure of the homogenized constitutive relations is strictly preserved in the change of scales. The linear effective properties can be computed once and for all by solving a finite number of unit-cell problems.
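Schematically, for a periodic unit cell V this reads (standard linear homogenization notation, stated here for reference):

```latex
% Cell problem on the unit cell V for a prescribed macroscopic strain \bar{\varepsilon}:
\[ \operatorname{div} \sigma = 0, \qquad
   \sigma = \mathbb{C}(x) : \bigl( \bar{\varepsilon} + \nabla^{s} w \bigr), \qquad
   w \ \text{periodic on } V. \]
% Averaging the resulting stress defines the effective stiffness \overline{\mathbb{C}}:
\[ \langle \sigma \rangle_V = \overline{\mathbb{C}} : \bar{\varepsilon}, \]
% so \overline{\mathbb{C}} is determined by the six (in 3D) unit-strain cell problems.
```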
Unfortunately, there is no exact scale decoupling in multiscale nonlinear problems that would allow one to solve only a few unit-cell problems and then use their solutions subsequently at a larger scale. Computational approaches developed to investigate the response of representative volume elements along specific loading paths do not provide constitutive relations, and most of the huge body of information generated in the course of these costly computations is lost.
Model reduction techniques, such as the Nonuniform Transformation Field Analysis, may be used to exploit the information generated in the course of such computations and, at the same time, to account for the commonly observed patterning of the local plastic strain field. A new version of the model will be proposed in this talk, with the aim of preserving the underlying variational structure of the constitutive relations while using approximations which are common in nonlinear homogenization.

4. Daniel Rixen 
(München, Germany)

Many components in modern products cannot be modeled easily or properly: they need to be characterized experimentally. In vibration analysis of complex structural and mechatronic systems, it is highly desirable to build models of assemblies in which some components are modeled numerically (e.g. by finite elements) whereas others are characterized experimentally. Building substructured models from experimental measurements enables engineers to rapidly detect troublesome parts, identify excitation sources and optimize the dynamics of their design. Such strategies form the basis of so-called Transfer Path Analysis (TPA) techniques.
Although the mathematical theory of TPA is rather straightforward, its mechanical interpretation is not intuitive, and applying simple ideas to real measurements is a real challenge. If you think that measurements are always the reality … think again: measured dynamic properties of structures (typically Frequency Response Functions) are not only inaccurate, but often do not even satisfy fundamental mechanical properties such as passivity or reciprocity. Such errors render a straightforward application of the theory illusory.
In the presentation, we will discuss a framework describing the simple algebra needed in substructuring techniques for structural vibration analysis, and we will show how the vibration sources of active components can be measured indirectly. We will also outline some important tricks and twists that are needed to use measured components in an assembly. The methods will be illustrated on an industrial example.
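As an illustration of that "simple algebra", here is a minimal sketch of the standard dual (LM-FBS) coupling formula for frequency-based substructuring; the admittance matrices and the signed Boolean matrix B below are illustrative placeholders, not measured data from the lecture:

```python
import numpy as np

def lm_fbs_couple(Y_sub, B):
    """Dual (LM-FBS) assembly of substructure FRF matrices at one frequency:
    Y_coupled = Y - Y B^T (B Y B^T)^{-1} B Y,
    where Y is the block-diagonal matrix of substructure admittances and B is the
    signed Boolean matrix enforcing interface compatibility B u = 0."""
    Y = np.asarray(Y_sub)
    BYBt = B @ Y @ B.T
    return Y - Y @ B.T @ np.linalg.solve(BYBt, B @ Y)

# illustrative example: two 1-DoF substructures rigidly coupled at their single DoF
omega = 2.0 * np.pi * 50.0                     # evaluation frequency [rad/s]
def sdof_frf(m, k, c):                         # receptance of a 1-DoF oscillator
    return 1.0 / (-m * omega**2 + 1j * omega * c + k)

Y = np.diag([sdof_frf(1.0, 1.0e4, 2.0), sdof_frf(0.5, 2.0e4, 1.0)])
B = np.array([[1.0, -1.0]])                    # compatibility: u_A - u_B = 0
print(lm_fbs_couple(Y, B))
```

With measured FRFs in place of the analytical ones, exactly the issues mentioned above (noise, lack of reciprocity or passivity) make the inversion of B Y B^T delicate, which is where the "tricks and twists" come in.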