Multi-fidelity deep residual recurrent neural networks for uncertainty quantification
Kani, J. Nagoor
Effective propagation of uncertainty through a nonlinear dynamical system is an essential task in a number of engineering applications. One viable probabilistic approach to propagating uncertainty from high-dimensional random inputs to the high-fidelity model outputs is the Monte Carlo method. However, the Monte Carlo method requires a substantial number of computationally expensive high-fidelity simulations for its estimates to converge to the desired statistics. Hence, performing Monte Carlo high-fidelity simulations becomes computationally prohibitive for large-scale realistic problems. Multi-fidelity approaches provide a general framework for combining a hierarchy of computationally cheap low-fidelity models to accelerate the Monte Carlo estimation of the high-fidelity model output. The objective of this thesis is to derive computationally efficient low-fidelity models and an effective multi-fidelity framework to accelerate the Monte Carlo method that relies on a single high-fidelity model. In this thesis, a physics-aware recurrent neural network (RNN) called the deep residual recurrent neural network (DR-RNN) is developed as an efficient low-fidelity model for nonlinear dynamical systems. The information contained in the mathematical model of the nonlinear dynamical system is exploited to construct the DR-RNN architecture. The DR-RNN is inspired by the iterative steps of line search methods in finding the residual minimiser of numerically discretized differential equations. More specifically, the stacked layers of the DR-RNN architecture are formulated to act collectively as an iterative scheme. The dynamics of the DR-RNN are explicit in time, with remarkable convergence and stability properties even for large time steps that violate the numerical stability condition.
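The idea of stacked layers acting as an iterative residual-minimising scheme can be sketched as follows. This is an illustrative simplification only: the fixed step size `eta`, the function names, and the plain gradient-style update are assumptions for exposition, whereas the actual DR-RNN uses trained layer weights and a more elaborate update rule.

```python
import numpy as np

def drrnn_step(u_prev, f, dt, n_layers=4, eta=0.5):
    """One time step of a simplified DR-RNN-style update (sketch).

    Each stacked layer acts as one iteration of a residual-minimising
    scheme for the implicit-Euler residual
        r(u) = u - u_prev - dt * f(u).
    """
    u = u_prev.copy()                 # initial guess for the new state
    for _ in range(n_layers):         # layers play the role of iterations
        r = u - u_prev - dt * f(u)    # residual of the discretised ODE
        u = u - eta * r               # each layer drives the residual down
    return u

# usage: damped linear dynamics du/dt = -u, advanced with a large time step
f = lambda u: -u
u = np.array([1.0])
for _ in range(10):
    u = drrnn_step(u, f, dt=0.5)
```

Because the layers approximate the implicit-Euler fixed point rather than taking an explicit Euler step, the update stays stable even when `dt` exceeds the explicit stability limit.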
Numerical examples demonstrate that the DR-RNN can effectively emulate the high-fidelity model of nonlinear physical systems with significantly fewer parameters than standard RNN architectures. Further, the DR-RNN is combined with Proper Orthogonal Decomposition (POD) for model reduction of time-dependent partial differential equations. The numerical results show that the proposed DR-RNN is an explicit and stable reduced-order modeling technique. The numerical results also show significant gains in accuracy as the depth of the DR-RNN increases, similar to other applications of deep learning. Next, a reduced-order modeling (ROM) technique for subsurface multi-phase flow problems is developed building on the DR-RNN architecture. More specifically, the DR-RNN is combined with POD and the discrete empirical interpolation method (DEIM) to reduce the computational complexity associated with high-fidelity subsurface multi-phase flow simulations. In the presented formulation, POD is used to construct an optimal set of reduced basis functions, and DEIM is employed to evaluate the nonlinear terms at a cost independent of the high-fidelity model size. The proposed ROM is demonstrated on two uncertainty quantification test cases involving Monte Carlo simulation of subsurface flow with random permeability fields. The obtained results demonstrate that the DR-RNN combined with POD-DEIM provides an accurate and stable ROM at a fixed computational budget that is much lower than the computational cost of a standard POD-Galerkin ROM combined with DEIM for nonlinear dynamical systems. Finally, this thesis focuses on developing a multi-fidelity framework to estimate the statistics of high-fidelity model outputs of interest. Recently, the Multi-Fidelity Monte Carlo (MFMC) method and the Multi-Level Monte Carlo (MLMC) method have been shown to significantly accelerate Monte Carlo estimation by making use of low-cost low-fidelity models.
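The two building blocks of the ROM, a POD basis and greedy DEIM point selection, can be sketched in a few lines. This is a generic sketch, not the thesis's subsurface-flow implementation: the sine-mode snapshot data and all names are assumptions, and only the standard SVD-based POD and greedy DEIM selection are shown.

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD basis: leading r left singular vectors of the snapshot matrix."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def deim_indices(U):
    """Greedy DEIM point selection over the columns of a POD basis U.

    Picks one interpolation row per basis vector, so the nonlinear term
    only needs to be evaluated at len(indices) grid points.
    """
    m = U.shape[1]
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        P = U[idx, :j]                     # selected rows of first j vectors
        c = np.linalg.solve(P, U[idx, j])  # interpolation coefficients
        res = U[:, j] - U[:, :j] @ c       # residual of the next basis vector
        idx.append(int(np.argmax(np.abs(res))))  # point of largest residual
    return idx

# usage: snapshots of sine modes on a 1-D grid (hypothetical data)
x = np.linspace(0.0, 1.0, 50)
snaps = np.stack([np.sin((k + 1) * np.pi * x) for k in range(5)], axis=1)
Phi = pod_basis(snaps, 3)   # reduced basis, shape (50, 3)
idx = deim_indices(Phi)     # 3 interpolation points
```

In a POD-Galerkin ROM the state is approximated as `Phi @ a` with a small coefficient vector `a`, and DEIM restricts nonlinear evaluations to the rows in `idx`, which is what makes the online cost independent of the full model size.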
In this thesis, the features of both the MFMC method and the MLMC method are combined into a single framework called the Multi-Fidelity-Multi-Level Monte Carlo (MFML-MC) method. In the MFML-MC method, an MLMC framework is first developed in which a multi-level hierarchy of POD approximations of the high-fidelity outputs is utilized as low-fidelity models. Next, the MFMC method is incorporated into this MLMC framework by modifying the MLMC estimator at each level to benefit from a level-specific low-fidelity model. Finally, a variant of the deep residual recurrent neural network, called Model-Free DR-RNN (MF-DR-RNN), is used as the level-specific low-fidelity model in the MFML-MC framework. The performance of the MFML-MC method is compared to Monte Carlo estimation that uses either a high-fidelity model or a single low-fidelity model on two subsurface flow problems with random permeability fields. Numerical results show that the MFML-MC method provides an unbiased estimator and achieves speedups of orders of magnitude compared to Monte Carlo estimation that uses a single high-fidelity model.
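The unbiasedness of such multi-fidelity estimators rests on the telescoping identity E[f_hi] = E[f_lo] + E[f_hi - f_lo], which both MLMC and MFMC build on. A minimal two-level sketch, using hypothetical stand-in models rather than the thesis's subsurface-flow simulators, is:

```python
import numpy as np

rng = np.random.default_rng(0)

def two_level_estimator(f_hi, f_lo, n_lo=10000, n_hi=100):
    """Two-level Monte Carlo estimator (the building block of MLMC-style
    methods):  E[f_hi] = E[f_lo] + E[f_hi - f_lo].

    Many cheap low-fidelity samples estimate the first term; a few
    coupled high-fidelity samples estimate the low-variance correction.
    """
    z_lo = rng.standard_normal(n_lo)
    z_hi = rng.standard_normal(n_hi)  # coupled: same inputs fed to both models
    return f_lo(z_lo).mean() + (f_hi(z_hi) - f_lo(z_hi)).mean()

# hypothetical models: an "expensive" output and a cheap linear surrogate
f_hi = lambda z: np.exp(0.1 * z)
f_lo = lambda z: 1.0 + 0.1 * z
est = two_level_estimator(f_hi, f_lo)   # approximates E[exp(0.1 Z)]
```

Because the correction term f_hi - f_lo has much smaller variance than f_hi itself, only a handful of high-fidelity evaluations are needed, which is the source of the speedups reported in the thesis.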