Applied & Computational Mathematics seminar: Computational strategies for state constraints in PDE-constrained optimization under uncertainty
Feb 9, 2024, 10:00 - 11:00 AM
Speaker: Thomas Surowiec, Simula Research Laboratory
Title: Computational strategies for state constraints in PDE-constrained optimization under uncertainty
Abstract: Optimization under uncertainty with feasible sets governed by random partial differential equations (PDEs) has been the focus of many studies over the past ten years. This modeling framework offers more flexibility than traditional PDE-constrained optimization by allowing model parameters to vary over a range of values, which is especially useful when the parameters have been estimated from data using statistical procedures. Recent work has even shown how risk-averse PDE-constrained optimization can be used in the context of digital twins for identifying weaknesses in structural engineering.
From a computational perspective, we pay a significant price by needing to treat the resulting states, i.e., the solutions of the PDEs, as random fields. The implicit uncertainty propagates through the rest of the problem wherever the state variables appear. Until recently, this effect was largely confined to the objective function, as the vast majority of studies avoided the issue of state constraints. However, state constraints play an important role in optimal control and optimization, as they force us to search for optimal decisions that avoid exceeding potentially catastrophic thresholds on temperatures, displacements, budgets, etc.
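As a rough sketch (the notation here is assumed for illustration, not taken from the talk), a state-constrained problem of this type can be written as:

```latex
\min_{u \in U_{\mathrm{ad}}} \; \mathbb{E}\!\left[ J\big(y(\xi), u\big) \right]
\quad \text{s.t.} \quad
e\big(y(\xi), u; \xi\big) = 0 \ \text{(random PDE)}, \qquad
y(\xi) \le \psi \ \text{a.s.},
```

where the control $u$ is deterministic, the state $y(\xi)$ is a random field, and the almost-sure bound $y(\xi) \le \psi$ encodes the kind of critical threshold (on temperature, displacement, etc.) described above.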
We motivate and discuss the central problem of state constraints in PDE-constrained optimization under uncertainty with the help of basic model problems, since the difficulties persist in more complex settings as well. To this end, we discuss and compare several possibilities: a quadratic penalty approach, probability constraints, and relaxed constraints. Although the quadratic penalty method allows a direct application of a semismooth Newton method after approximation of the underlying probability measure, the need for penalty-parameter updates and increasing sample sizes gives rise to a number of open questions on adaptivity. We contrast these results with an online stochastic approximation method that uses a (degenerate) reformulation of the state constraint as a scalar expectation constraint. We analyze this algorithm in the continuous setting under periodic restarts, which yields probabilistic convergence rates for optimal values and feasibility. As part of the numerical results, we provide a post-optimization analysis, via a Kolmogorov-Smirnov test, of the distributions of the random objective and constraint functionals.
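To fix ideas, the quadratic penalty idea combined with a sample-based approximation of the probability measure, followed by a Kolmogorov-Smirnov check, can be sketched on a toy scalar problem. This is only an illustrative sketch under assumed notation (the model, sample sizes, and penalty schedule below are invented for the example, not from the talk):

```python
# Toy sketch: quadratic penalty + sample-average approximation for an
# almost-sure constraint g(x, xi) = x*xi - 1 <= 0, xi ~ U(0.5, 1.5),
# objective f(x) = (x - 2)^2, followed by a two-sample KS test.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
xi = rng.uniform(0.5, 1.5, size=2000)  # sample approximating the probability measure

def penalized(x, rho):
    # objective plus quadratic penalty on sampled constraint violations
    violation = np.maximum(0.0, x * xi - 1.0)
    return (x - 2.0) ** 2 + 0.5 * rho * np.mean(violation ** 2)

# Crude penalty loop: increase rho and re-solve the smooth subproblem.
# (In the PDE setting each subproblem would be solved by, e.g., a
# semismooth Newton method; here a bounded scalar solver suffices.)
x_opt = 2.0
for rho in [1e1, 1e3, 1e5]:
    res = minimize_scalar(penalized, args=(rho,), bounds=(0.0, 3.0), method="bounded")
    x_opt = res.x

# Post-optimization analysis: compare the constraint-value distributions at
# x_opt from two independent samples with a two-sample KS test.
g1 = x_opt * rng.uniform(0.5, 1.5, 2000) - 1.0
g2 = x_opt * rng.uniform(0.5, 1.5, 2000) - 1.0
stat, pval = ks_2samp(g1, g2)
print(x_opt, stat, pval)
```

As the penalty parameter grows, the iterates are pushed toward the almost-surely feasible region; the trade-off between penalty updates and sample sizes is exactly the adaptivity question raised in the abstract.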
Time: Friday, February 9, 2024, 10:00-11:00 AM
Place: Exploratory Hall, room 4106 and Zoom