A robust parameterized enhanced shift-splitting preconditioner for three-by-three block saddle point problems
Pinki Khatun
Abstract
Numerical approximation methods are a cornerstone of science and engineering, translating complex real-world problems into systems of linear equations. Consequently, the ability to solve these systems efficiently is a linchpin of computational science, driving advances in fields ranging from physics and engineering to machine learning and finance. Linear systems in saddle point form have attracted considerable interest due to their central role in large-scale PDE-constrained optimization and quadratic programming, where the underlying problems are often extremely high-dimensional. This makes the development of efficient and robust solution strategies essential.

This work introduces preconditioned iterative methods for efficiently solving saddle point problems. We focus specifically on saddle point problems with a three-by-three block structure: $$ \mathcal{A}{\bf u} = {\bf d}, \qquad \mathcal{A} = \begin{bmatrix} A & B^\top & 0 \\ -B & 0 & -C^\top \\ 0 & C & 0 \end{bmatrix}, \quad {\bf u} = \begin{bmatrix} x \\ y \\ z \end{bmatrix}, \quad {\bf d} = \begin{bmatrix} f \\ g \\ h \end{bmatrix},$$ where $A \in \mathbb{R}^{n \times n}$ is symmetric positive definite, $B \in \mathbb{R}^{m \times n}$ and $C \in \mathbb{R}^{p \times m}$ are full row-rank matrices, and $f \in \mathbb{R}^n$, $g \in \mathbb{R}^m$, $h \in \mathbb{R}^p$ are given vectors. Such systems are known as three-by-three block saddle point problems (TBSPPs). Solving them efficiently is challenging due to the ill-conditioning and sensitivity of the coefficient matrix $\mathcal{A}$. Moreover, since $\mathcal{A}$ is typically large and sparse, iterative methods are more attractive than direct solvers. Among them, Krylov subspace methods are particularly effective owing to their low memory requirements and ease of implementation. However, their performance deteriorates when the system is poorly conditioned, leading to slow convergence. This underscores the importance of designing robust and efficient preconditioners to accelerate the convergence of iterative methods in practical applications.

To address these challenges, we propose a novel parameterized enhanced shift-splitting (PESS) iterative scheme together with its associated preconditioner. We further introduce a local variant, the LPESS preconditioner, which relaxes the PESS preconditioner for improved efficiency. Both preconditioners are specifically designed to accelerate Krylov subspace methods such as GMRES, enabling faster and more robust convergence even for ill-conditioned TBSPPs. We establish necessary and sufficient conditions for the convergence of the PESS iteration and provide a theoretical analysis of spectral bounds for both the PESS and LPESS preconditioned matrices.

Extensive numerical experiments demonstrate the effectiveness of the proposed PESS and LPESS preconditioners. The key observations are as follows: $(i)$ the proposed PESS and LPESS preconditioners outperform existing baseline preconditioners in terms of iteration counts and CPU time; $(ii)$ the proposed preconditioners significantly reduce the condition number of $\mathcal{A}$, demonstrating their proficiency in solving TBSPPs; $(iii)$ the PESS and LPESS preconditioned matrices exhibit more tightly clustered spectral distributions than the baseline preconditioned matrices; $(iv)$ a sensitivity analysis, conducted by introducing different percentages of noise into the system, demonstrates the robustness of the proposed PESS preconditioner.
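For context, the link between the PESS iterative scheme and its associated preconditioner follows the standard splitting mechanism (a general fact about stationary iterations, stated here for orientation rather than taken from this abstract): writing $\mathcal{A} = \mathcal{P} - \mathcal{Q}$ for a nonsingular $\mathcal{P}$ induces the fixed-point iteration $$ {\bf u}^{(k+1)} = \mathcal{P}^{-1}\mathcal{Q}\,{\bf u}^{(k)} + \mathcal{P}^{-1}{\bf d}, $$ which converges for every initial guess if and only if the spectral radius satisfies $\rho(\mathcal{P}^{-1}\mathcal{Q}) < 1$. The same splitting matrix $\mathcal{P}$ then serves as a preconditioner, since $\mathcal{P}^{-1}\mathcal{A}{\bf u} = \mathcal{P}^{-1}{\bf d}$ has the same solution but, ideally, a more clustered spectrum for Krylov methods to exploit.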
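As a concrete illustration, the following is a minimal Python/SciPy sketch of assembling a TBSPP instance and solving it with preconditioned GMRES. Since the abstract does not give the PESS formula, the sketch substitutes the classical shift-splitting preconditioner $\mathcal{P} = \frac{1}{2}(\alpha I + \mathcal{A})$ as a stand-in; the randomly generated test matrices, the problem sizes, and the value $\alpha = 0.1$ are illustrative assumptions, and the PESS or LPESS preconditioner would occupy the same slot.

```python
# Minimal sketch: solve a three-by-three block saddle point problem (TBSPP)
# with shift-splitting-preconditioned GMRES. The classical shift-splitting
# preconditioner P = (1/2)(alpha*I + A_cal) stands in for the paper's PESS
# preconditioner, whose formula is not given in the abstract. Sizes, matrices,
# and alpha = 0.1 are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n, m, p = 40, 20, 10  # requires m <= n and p <= m for full row rank

# Random TBSPP instance: A symmetric positive definite, B and C full row rank.
M = rng.standard_normal((n, n))
A = sp.csr_matrix(M @ M.T + n * np.eye(n))      # SPD by construction
B = sp.csr_matrix(rng.standard_normal((m, n)))  # full row rank (a.s.)
C = sp.csr_matrix(rng.standard_normal((p, m)))  # full row rank (a.s.)

# Coefficient matrix  A_cal = [A  B^T  0; -B  0  -C^T; 0  C  0]
# (None blocks are zero; bmat infers their shapes).
A_cal = sp.bmat([[A,    B.T,  None],
                 [-B,   None, -C.T],
                 [None, C,    None]], format="csc")
d = rng.standard_normal(n + m + p)

# Shift-splitting preconditioner, factorized once and reused each iteration.
alpha = 0.1
P = 0.5 * (alpha * sp.identity(n + m + p, format="csc") + A_cal)
P_lu = spla.splu(P)
M_op = spla.LinearOperator(A_cal.shape, matvec=P_lu.solve)

# Preconditioned GMRES (rtol keyword requires SciPy >= 1.12; older
# versions use tol instead).
u, info = spla.gmres(A_cal, d, M=M_op, rtol=1e-8, restart=50)
print("converged" if info == 0 else f"info = {info}",
      "| residual:", np.linalg.norm(A_cal @ u - d))
```

Note the design point this sketch reflects: the sparse LU factorization of $\mathcal{P}$ is computed once and reused at every GMRES iteration, which is what makes splitting-based preconditioners practical for large sparse systems.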