# Applied & Computational Mathematics seminar: Transport Equations, Compositional Sparsity, and Model Classes for High-Dimensional Approximation

Sep 22, 2023, 1:30 - 2:30 PM

**Speaker:** Wolfgang Dahmen, University of South Carolina

**Title:** Transport Equations, Compositional Sparsity, and Model Classes for High-Dimensional Approximation

**Abstract:** The need to recover or approximate functions of many variables is ubiquitous in numerous application contexts such as machine learning, uncertainty quantification, or data assimilation. In all these scenarios the so-called Curse of Dimensionality is an intrinsic obstruction that has been a long-standing theme in approximation theory. It roughly expresses an exponential dependence of “recovery cost” on the spatial dimension. It is well known that the ability to avoid the Curse depends both on the structure of the particular “model class” of functions one wishes to approximate and on the approximation system that is used. In this talk we highlight recent results concerning the interplay between these two constituents. For small spatial dimensions, approximation complexity is in essence determined by the smoothness of the approximand, e.g., in terms of Sobolev or Besov regularity. This is effectively exploited by approximation systems that rely on spatial localization. By contrast, in high dimensions, more global structural sparsity properties determine the ability to avoid the Curse, unless one imposes excessively high degrees of smoothness.

Inspired by the highly nonlinear structure of Deep Neural Networks (DNNs), we focus in particular on a new notion of “tamed compositional sparsity” that leads to new types of model classes for high-dimensional approximation. The relevance of such classes is illustrated in the context of solution manifolds of (parameter-dependent) families of partial differential equations (PDEs) and operator learning. Specifically, the framework accommodates “inheritance theorems”: compositional sparsity of problem data (such as parameter-dependent coefficient fields) is inherited by solutions. In particular, we focus on transport equations, where one cannot rely on dissipative effects. In fact, it is well known that common model reduction concepts for an effective approximation of the corresponding parameter-to-solution maps fail for this type of PDE. Nevertheless, given compositionally sparse data, the corresponding solution manifolds can be shown to belong to compositional approximation classes whose manifold widths defy the Curse of Dimensionality. Corresponding concrete approximation rates, realized by DNNs, exhibit only a low algebraic dependence on the (large) parametric dimension. We conclude by briefly discussing the bearing of these findings on other problem types and ensuing research directions.
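As a concrete illustration of the Curse mentioned in the abstract (this sketch is ours, based on classical approximation-theory benchmarks, not part of the talk announcement): for the unit ball of a smoothness-$s$ class on $[0,1]^d$ (e.g., a Sobolev or Besov ball), the optimal worst-case approximation error achievable with $n$ degrees of freedom scales like

```latex
% Classical benchmark rate for a smoothness-s model class on [0,1]^d:
% optimal worst-case error with n degrees of freedom
E_n \;\sim\; n^{-s/d}
% so reaching accuracy \varepsilon requires
\quad\Longrightarrow\quad
n(\varepsilon) \;\gtrsim\; \varepsilon^{-d/s}.
```

The required cost $n(\varepsilon)$ thus grows exponentially in the dimension $d$ unless the smoothness order $s$ grows proportionally with $d$, which is exactly the “excessively high smoothness degrees” caveat in the abstract; structural notions such as compositional sparsity aim to replace smoothness as the driver of the rate.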

**Time:** Friday, September 22, 2023, 1:30 - 2:30 PM

**Place:** Exploratory Hall, Room 4106