Third-Party Funded Projects
DFG - Priority Program (SPP 2265) Random Geometric Systems
Project: Scaling limits of evolving spanning trees and of random walks on evolving spanning trees
Anita Winter
In this project we study scaling limits of evolving uniform spanning trees (USTs) on further classes of networks, including Erdős-Rényi graphs, sequences of densely connected expander graphs, and low-dimensional tori. The main motivation comes from modeling large, sparsely connected networks; trees are extreme cases of sparsely connected networks. In real-world networks the structure may change over time. One emphasis of the project concerns a particular network dynamics, the Aldous-Broder algorithm, a tree-valued stochastic process that generates the UST. A random walk is a simple stochastic process on a network that allows one to explore the network's structure; in the context of communication networks (e.g. the internet, WiFi) it can be understood as a message passed from device to device. In current research, random walks on dynamic network models are compared with random walks on static networks. In this project we determine the space and time scales on which the random walk has a diffusive scaling limit.
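To make the underlying construction concrete, the following is a minimal Python sketch of the static Aldous-Broder algorithm: a simple random walk on a fixed finite graph whose first-entrance edges form a uniform spanning tree. The function name aldous_broder and the dictionary-based graph representation are illustrative choices for this sketch, not the project's tree-valued dynamics.

    import random

    def aldous_broder(adjacency, root):
        # Sample a uniform spanning tree of a finite connected graph via the
        # Aldous-Broder algorithm: run a simple random walk from `root` and
        # keep, for every other vertex, the edge along which it is entered
        # for the first time. `adjacency` maps each vertex to a list of its
        # neighbours (an illustrative plain-dict graph representation).
        tree_edges = set()
        visited = {root}
        current = root
        while len(visited) < len(adjacency):
            nxt = random.choice(adjacency[current])
            if nxt not in visited:          # first entrance: keep this edge
                visited.add(nxt)
                tree_edges.add((current, nxt))
            current = nxt
        return tree_edges

    # Example: a uniform spanning tree of the complete graph on 5 vertices
    K5 = {v: [w for w in range(5) if w != v] for v in range(5)}
    print(aldous_broder(K5, root=0))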
Project: Spatial growth and information exchange in evolving environments and on evolving networks
Anton Klimovsky
What is the effect of a space-time varying environment on the long-time behavior of spatially structured populations of interacting particles/individuals/agents? This question is highly relevant, e.g., in the life and social sciences, economics, computer science, and artificial intelligence. Mathematically, the project focuses on two phenomenological models of interacting particle systems: (1) branching Brownian motion, which models population growth and spatial spreading; and (2) the voter model, which models information exchange in a population of agents. The novel features that this project introduces into these classical models are (1) space-time-correlated environments and (2) evolving networks. These play the role of the geographic space and substantially change the underlying spatial geometry. The project aims at investigating (1) growth vs. extinction, population size, and spread, and (2) clustering vs. consensus of agents, as well as space-time scaling limits of stochastic processes on evolving networks and evolving graph limits.
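For illustration, here is a minimal Python sketch of the classical voter model on a fixed graph; the project's versions replace this static geometry by space-time-correlated environments and evolving networks. The function name voter_model and the dictionary-based representations are illustrative assumptions, not the project's models.

    import random

    def voter_model(adjacency, opinions, steps):
        # Discrete-time voter model on a static graph: in each step a
        # uniformly chosen agent copies the opinion of a uniformly chosen
        # neighbour. `adjacency` maps vertices to neighbour lists and
        # `opinions` maps vertices to initial opinions.
        opinions = dict(opinions)
        vertices = list(adjacency)
        for _ in range(steps):
            v = random.choice(vertices)
            w = random.choice(adjacency[v])
            opinions[v] = opinions[w]       # v adopts the opinion of w
        return opinions

    # Example: two competing opinions on a cycle of 10 agents
    cycle = {v: [(v - 1) % 10, (v + 1) % 10] for v in range(10)}
    initial = {v: v % 2 for v in range(10)}
    print(voter_model(cycle, initial, steps=1000))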
DFG - Research grant Deep neural networks overcome the curse of dimensionality in the numerical approximation of stochastic control problems and of semilinear Poisson equations
Martin Hutzenthaler (with Thomas Kruse, Universität Gießen)
Partial differential equations (PDEs) are a key tool in the modeling of many real-world phenomena. Several PDEs that arise in financial engineering, economics, quantum mechanics, or statistical physics are nonlinear, high-dimensional, and cannot be solved explicitly. It is a highly challenging task to provably solve such high-dimensional nonlinear PDEs approximately without suffering from the so-called curse of dimensionality. Deep neural networks (DNNs) and other deep learning-based methods have recently been applied very successfully to a number of computational problems. In particular, simulations indicate that algorithms based on DNNs overcome the curse of dimensionality in the numerical approximation of solutions of certain nonlinear PDEs. For certain linear and nonlinear PDEs this has also been proven mathematically. The key goal of this project is to rigorously prove for the first time that DNNs overcome the curse of dimensionality for a class of nonlinear PDEs arising from stochastic control problems and for a class of semilinear Poisson equations with Dirichlet boundary conditions.
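As a hedged illustration of the kind of stochastic representation on which such dimension-robust approximation schemes typically build, the sketch below evaluates the Feynman-Kac formula for the linear d-dimensional heat equation by plain Monte Carlo; in this linear special case the statistical error is already independent of the dimension. This is emphatically not the project's method for nonlinear PDEs, and the function name heat_solution_mc is an illustrative choice.

    import numpy as np

    def heat_solution_mc(g, x, t, samples=100_000, rng=None):
        # Monte Carlo estimate of the solution u(t, x) = E[g(x + W_t)] of the
        # d-dimensional heat equation du/dt = (1/2) * Laplacian(u) with
        # initial condition u(0, .) = g, via the Feynman-Kac formula; the
        # statistical error decays like samples**(-1/2) independently of d.
        rng = np.random.default_rng(0) if rng is None else rng
        increments = rng.normal(scale=np.sqrt(t), size=(samples, len(x)))
        return g(x + increments).mean()

    # Example in d = 100 dimensions: for g(y) = |y|^2 the exact solution at
    # x = 0 is u(t, 0) = d * t.
    d, t = 100, 1.0
    estimate = heat_solution_mc(lambda y: (y ** 2).sum(axis=1), np.zeros(d), t)
    print(estimate, "vs exact value", d * t)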
DFG - project Recursive and sparse approximation in reinforcement learning with applications
Denis Belomestny (with Ch. Bayer and V. Spokoiny, WIAS)
Reinforcement learning (RL) is an integral part of machine learning concerned with controlling a system so as to maximize a performance measure that expresses a long-term objective. Reinforcement learning is of great interest because of its many practical applications, ranging from problems in artificial intelligence to operations research and financial mathematics. With recent breakthroughs in deep learning, deep reinforcement learning (DRL) has demonstrated notable success in solving highly challenging problems. DRL algorithms are compelling, but it remains an open issue how to relate the architecture of the networks involved to the structure of the underlying dynamic programming algorithm. Moreover, in DRL the approximate dynamic programming algorithm involves solving a highly nonconvex statistical optimization problem. As an alternative to conventional deep neural network approximations in each backward step, one can construct a more problem-oriented nonlinear approximation using information from the previous stages of the approximate dynamic programming algorithm. In this project, we aim at developing new types of recursive, sparse, and interpretable approximation methods for RL with provable theoretical guarantees. In particular, our objective is to present a new fitted Q-iteration algorithm with adaptively chosen approximation classes that depend on previously constructed approximations. We shall compare this new approach to more conventional DRL algorithms regarding their theoretical guarantees and interpretability. Furthermore, we will extend our methods to mean-field systems by combining our expertise on RL and McKean-Vlasov type processes. As a practical application, we will interpret the problem of consistent re-calibration of financial models as an RL problem and study it using the methods developed in this project.
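To fix ideas, the following is a minimal Python sketch of classical fitted Q-iteration with a fixed linear approximation class; the adaptively chosen, recursive and sparse approximation classes proposed in the project are not part of this sketch, and the interface (fitted_q_iteration, a user-supplied feature map) is an illustrative assumption.

    import numpy as np

    def fitted_q_iteration(transitions, actions, features, gamma=0.95, iterations=50):
        # Fitted Q-iteration with a linear approximation class: repeatedly fit
        # Q(s, a) ~ features(s, a) @ theta to the bootstrapped targets
        # r + gamma * max_b Q(s', b) computed from a fixed batch of
        # (state, action, reward, next_state) transitions.
        theta = np.zeros(len(features(transitions[0][0], actions[0])))
        for _ in range(iterations):
            X, y = [], []
            for s, a, r, s_next in transitions:
                target = r + gamma * max(features(s_next, b) @ theta for b in actions)
                X.append(features(s, a))
                y.append(target)
            theta, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
        return theta

    # Example: a toy batch from a two-state chain with two actions
    actions = [0, 1]
    feat = lambda s, a: np.array([1.0, s, a, s * a])
    batch = [(0, 1, 1.0, 1), (1, 0, 0.0, 1), (1, 1, 1.0, 0), (0, 0, 0.0, 0)]
    print(fitted_q_iteration(batch, actions, feat))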