A Note on Parameterized Preconditioned Method for Singular Saddle Point Problems


1. Introduction

We consider the iterative solution of the following linear system:

$A z \equiv \begin{bmatrix} B & E \\ -E^{*} & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} f \\ -g \end{bmatrix} \equiv b,$ (1)

where $B \in \mathbb{C}^{p \times p}$ is Hermitian positive definite, $E \in \mathbb{C}^{p \times q}$ is rank-deficient, i.e., $\operatorname{rank}(E) = r < q$, $E^{*}$ denotes the conjugate transpose of $E$, and $f \in \mathbb{C}^{p}$, $g \in \mathbb{C}^{q}$. Linear systems of the form (1) are called saddle point problems. They arise in many application areas, including computational fluid dynamics, constrained optimization and weighted least-squares problems; see, e.g., [1] [2].
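As a concrete illustration, the block structure just described can be assembled and checked numerically. The sizes p, q, r below are arbitrary choices for this sketch, and real symmetric matrices stand in for Hermitian ones for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, r = 6, 4, 3                      # r < q makes E rank-deficient

# Hermitian (here: real symmetric) positive definite B
G = rng.standard_normal((p, p))
B = G @ G.T + p * np.eye(p)

# rank-deficient E: a product of p-by-r and r-by-q factors has rank r
E = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))

# saddle point matrix A = [[B, E], [-E^*, 0]]
A = np.block([[B, E], [-E.T, np.zeros((q, q))]])
```

Since B is positive definite, the null space of A is {(0, y) : Ey = 0}, so A has nullity q − r and is indeed singular.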

We review the Hermitian and skew-Hermitian splitting (HSS) [3] of the coefficient matrix $A$:

$A = H + S,$

where

$H = \frac{1}{2}(A + A^{*}) = \begin{bmatrix} B & 0 \\ 0 & 0 \end{bmatrix}, \quad S = \frac{1}{2}(A - A^{*}) = \begin{bmatrix} 0 & E \\ -E^{*} & 0 \end{bmatrix}.$
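The splitting can be verified directly: the Hermitian part of the saddle point matrix is blkdiag(B, 0) and the skew-Hermitian part carries the E blocks. A small numerical check (real symmetric case for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 5, 3
G = rng.standard_normal((p, p))
B = G @ G.T + p * np.eye(p)            # symmetric positive definite
E = rng.standard_normal((p, q))
A = np.block([[B, E], [-E.T, np.zeros((q, q))]])

H = (A + A.T) / 2                      # Hermitian part of A
S = (A - A.T) / 2                      # skew-Hermitian part of A

# expected block forms: H = blkdiag(B, 0), S = [[0, E], [-E^T, 0]]
H_expected = np.block([[B, np.zeros((p, q))],
                       [np.zeros((q, p)), np.zeros((q, q))]])
S_expected = np.block([[np.zeros((p, p)), E],
                       [-E.T, np.zeros((q, q))]])
```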

The PPHSS Iteration Method ([4]): Denote $\Omega = \operatorname{diag}(\omega I_{p}, \tau I_{q})$. Let $z^{(0)}$ be an arbitrary initial guess; for $k = 0, 1, 2, \ldots$, compute $z^{(k+1)}$ by the following iteration scheme until $\{z^{(k)}\}$ converges:

$\begin{cases} (\Omega P + H)\, z^{(k+1/2)} = (\Omega P - S)\, z^{(k)} + b, \\ (\Omega P + S)\, z^{(k+1)} = (\Omega P - H)\, z^{(k+1/2)} + b, \end{cases}$ (2)

where $\omega, \tau$ are given positive constants and

$P = \begin{bmatrix} C & 0 \\ 0 & I_{q} \end{bmatrix},$ (3)

where the matrix $C$ is Hermitian positive definite.
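The two-half-step sweep can be sketched in code. The concrete choice $\Omega P = \operatorname{diag}(\omega C, \tau I_q)$ below is our reading of the scheme and should be checked against [4]; the solver itself only needs two shifted linear solves per sweep:

```python
import numpy as np

def pphss(B, E, b, C, omega, tau, z0=None, maxit=500, tol=1e-8):
    """Sketch of the PPHSS sweep (2); X plays the role of Omega*P,
    assumed here to be diag(omega*C, tau*I)."""
    p, q = E.shape
    A = np.block([[B, E], [-E.T, np.zeros((q, q))]])
    H = (A + A.T) / 2
    S = (A - A.T) / 2
    X = np.block([[omega * C, np.zeros((p, q))],
                  [np.zeros((q, p)), tau * np.eye(q)]])
    z = np.zeros(p + q) if z0 is None else np.asarray(z0, float).copy()
    for k in range(maxit):
        z_half = np.linalg.solve(X + H, (X - S) @ z + b)   # first half-step
        z = np.linalg.solve(X + S, (X - H) @ z_half + b)   # second half-step
        if np.linalg.norm(b - A @ z) <= tol * np.linalg.norm(b):
            return z, k + 1
    return z, maxit
```

Note that any exact solution $z^{*}$ of (1) is a fixed point of the sweep: substituting $b = A z^{*} = (H + S) z^{*}$ into the first half-step gives $z^{(k+1/2)} = z^{*}$, and the second half-step then returns $z^{*}$ again.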

Evidently, the iteration scheme (2) of the PPHSS method can be rewritten as

$z^{(k+1)} = T(\omega,\tau)\, z^{(k)} + M(\omega,\tau)^{-1} b,$ (4)

where $T(\omega,\tau) = (\Omega P + S)^{-1}(\Omega P - H)(\Omega P + H)^{-1}(\Omega P - S)$ is the iteration matrix of the PPHSS method. In fact, Equation (4) may also result from the splitting

$A = M(\omega,\tau) - N(\omega,\tau),$ (5)

with

$M(\omega,\tau) = \frac{1}{2}(\Omega P + H)(\Omega P)^{-1}(\Omega P + S), \quad N(\omega,\tau) = \frac{1}{2}(\Omega P - H)(\Omega P)^{-1}(\Omega P - S).$ (6)

Evidently, the matrix $M(\omega,\tau)$ can act as a preconditioner for solving the linear system (1), which is called the PPHSS preconditioner. The PPHSS method is a special case of the generalized preconditioned HSS (GHSS) method [5]. When $\omega = \tau$, we obtain a special case of the PPHSS (SPPHSS) method. In order to analyze the semi-convergence of the PPHSS iteration, we let

$W = \operatorname{diag}(I_{p}, V),$

where $I_{p}$ is the identity matrix of order $p$ and $V$ is the unitary matrix from the singular value decomposition of $E$ introduced below. In the same way, we denote

$\hat{T}(\omega,\tau) = W^{*}\, T(\omega,\tau)\, W.$ (7)

Owing to the similarity of the matrices $T(\omega,\tau)$ and $\hat{T}(\omega,\tau)$, we only need to study the spectral properties of the matrix $\hat{T}(\omega,\tau)$ in order to analyze the semi-convergence of the PPHSS iteration.

2. The Semi-Convergence of the PPHSS Method

As the coefficient matrix $A$ is singular, the iteration matrix $T$ has the eigenvalue 1, and the spectral radius of $T$ cannot be smaller than 1. For the iteration matrix $T$ of a singular linear system, we introduce its pseudo-spectral radius as follows:

$\nu(T) = \max\{\, |\lambda| : \lambda \in \sigma(T),\ \lambda \neq 1 \,\},$

where $\sigma(T)$ is the set of eigenvalues of $T$.
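This quantity is straightforward to compute from the spectrum. The sketch below drops the eigenvalue 1 (up to a tolerance, since in floating point it is only approximately 1) and takes the maximum modulus of what remains:

```python
import numpy as np

def pseudo_spectral_radius(T, tol=1e-10):
    """nu(T) = max{|lam| : lam in sigma(T), lam != 1}."""
    lam = np.linalg.eigvals(T)
    rest = lam[np.abs(lam - 1.0) > tol]    # discard eigenvalue(s) equal to 1
    return 0.0 if rest.size == 0 else float(np.max(np.abs(rest)))

# e.g. for T = diag(1, 0.5, -0.25) the pseudo-spectral radius is 0.5
T = np.diag([1.0, 0.5, -0.25])
```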

For a matrix $K \in \mathbb{C}^{n \times n}$, the smallest nonnegative integer $i$ such that $\operatorname{rank}(K^{i}) = \operatorname{rank}(K^{i+1})$ is called the index of $K$, and we denote it by $\operatorname{index}(K)$. In fact, $\operatorname{index}(K)$ is the size of the largest Jordan block corresponding to the zero eigenvalue of $K$.
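The index can be computed exactly as this definition reads, by powering K until the rank stabilizes; a small sketch (with rank(K^0) = n by convention):

```python
import numpy as np

def matrix_index(K, tol=1e-10):
    """Smallest nonnegative i with rank(K^i) == rank(K^{i+1})."""
    n = K.shape[0]
    prev_rank = n                          # rank of K^0 = I
    P = np.eye(n)
    for i in range(1, n + 2):
        P = P @ K                          # P now holds K^i
        r = np.linalg.matrix_rank(P, tol=tol)
        if r == prev_rank:
            return i - 1
        prev_rank = r
    return n
```

For a nilpotent Jordan block of size 2 the index is 2, matching the "largest Jordan block at the zero eigenvalue" characterization; for a nonsingular matrix the index is 0.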

Lemma 2.1 ([6]). The iterative method (4) is semi-convergent if and only if

$\nu(T(\omega,\tau)) < 1$ and $\operatorname{index}(I - T(\omega,\tau)) = 1.$

Lemma 2.2 ([7]). $\operatorname{index}(I - T) = 1$ if and only if, for any $x$ with $(I - T)^{2} x = 0$, it holds that $(I - T) x = 0$.

Theorem 2.3. Assume that $B$ and $C$ are Hermitian positive definite and that $E$ is rank-deficient. Then $\operatorname{index}(I - T(\omega,\tau)) = 1$.

Proof. The proof is similar to that of Lemma 2.8 in [8] and is omitted here.

Lemma 2.4 ([4]). Let $B$ and $C$ be Hermitian positive definite and let $E$ be rank-deficient with $\operatorname{rank}(E) = r$. Then we can partition $\hat{T}(\omega,\tau)$ in Equation (7) as

$\hat{T}(\omega,\tau) = \begin{bmatrix} \hat{T}_{1}(\omega,\tau) & 0 \\ \hat{T}_{2}(\omega,\tau) & I_{q-r} \end{bmatrix}.$

Let $E = U \Sigma V^{*}$ be the singular value decomposition [9] of $E$, where $U \in \mathbb{C}^{p \times p}$ and $V \in \mathbb{C}^{q \times q}$ are unitary matrices, and

$\Sigma = \begin{bmatrix} \Sigma_{r} & 0 \\ 0 & 0 \end{bmatrix}, \quad \Sigma_{r} = \operatorname{diag}(\sigma_{1}, \sigma_{2}, \ldots, \sigma_{r}),$

where $\sigma_{1} \geq \sigma_{2} \geq \cdots \geq \sigma_{r} > 0$ are the nonzero singular values of $E$.
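NumPy returns exactly this decomposition, with the singular values sorted in descending order; for a rank-deficient E only the first r of them are (numerically) nonzero:

```python
import numpy as np

rng = np.random.default_rng(2)
p, q, r = 6, 4, 2
E = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))  # rank r

U, s, Vh = np.linalg.svd(E)            # E = U @ Sigma @ Vh, with Vh = V^*
Sigma = np.zeros((p, q))
Sigma[:q, :q] = np.diag(s)             # s has min(p, q) = q entries
num_nonzero = int(np.sum(s > 1e-10 * s[0]))   # numerical count of sigma_i > 0
```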

Lemma 2.5. The eigenvalues of the iteration matrix $T(\omega,\tau)$ of the PPHSS iteration method are 1 with multiplicity $q - r$, together with the roots of the quadratic equation

(8)

Proof. Notice the similarity of the matrices $T(\omega,\tau)$ and $\hat{T}(\omega,\tau)$. The proof is essentially analogous to that of Lemma 2.3 in [4], with only technical modifications, and is therefore omitted.

Lemma 2.6. If $\sigma_{i} \neq 0$, then the corresponding eigenvalue $\lambda$ of the iteration matrix satisfies $\lambda \neq 1$; if $\sigma_{i} = 0$, then $\lambda = 1$ or $|\lambda| < 1$.

Proof. If $\sigma_{i} \neq 0$, we give the proof by contradiction. Assume $\lambda = 1$. Substituting $\lambda = 1$ into the quadratic Equation (8), some algebra reduces it to a relation that forces $\sigma_{i} = 0$, which contradicts $\sigma_{i} \neq 0$.

If $\sigma_{i} = 0$, the roots of Equation (8) are 1 and a real number of modulus less than one, which finishes the proof.

Lemma 2.7 ([10]). Both roots of the real quadratic equation $\lambda^{2} - b\lambda + c = 0$ are less than one in modulus if and only if $|c| < 1$ and $|b| < 1 + c$.
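This classical criterion of Young can be sanity-checked against a direct root computation; the grid check below compares the two on real (b, c) pairs (with a small margin so that exact-boundary cases are classified consistently):

```python
import numpy as np

def roots_inside_unit_disk(b, c, margin=1e-9):
    """Directly test both roots of lambda^2 - b*lambda + c = 0."""
    return bool(np.all(np.abs(np.roots([1.0, -b, c])) < 1.0 - margin))

def young_criterion(b, c):
    """Lemma 2.7: |c| < 1 and |b| < 1 + c."""
    return abs(c) < 1.0 and abs(b) < 1.0 + c

# the two tests agree on a grid of real (b, c) pairs
grid = np.linspace(-2.5, 2.5, 41)
agree = all(roots_inside_unit_disk(b, c) == young_criterion(b, c)
            for b in grid for c in grid)
```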

Theorem 2.8. If the iteration parameters $\omega$ and $\tau$ are positive and satisfy the condition

(9)

then the pseudo-spectral radius of the PPHSS method satisfies $\nu(T(\omega,\tau)) < 1$.

Proof. Write the quadratic Equation (8) as $\lambda^{2} - b\lambda + c = 0$. Using condition (9) and Lemma 2.5, if $\sigma_{i} \neq 0$ we can obtain that

$|c| < 1$

and

$|b| < 1 + c.$

By Lemma 2.7, for the corresponding eigenvalues $\lambda$ of $T(\omega,\tau)$, it holds that $|\lambda| < 1$.

If $\sigma_{i} = 0$, by Lemma 2.6, the eigenvalues of $T(\omega,\tau)$, except 1, are of modulus less than one. According to the definition of the pseudo-spectral radius, we get $\nu(T(\omega,\tau)) < 1$.

Theorem 2.9. Let $\omega = \tau$, and let $B$ and $C$ be Hermitian positive definite. Then the optimal value of the iteration parameter for the SPPHSS iteration method is given by

$\omega^{*} = \arg\min_{\omega > 0} \nu(T(\omega)),$

and correspondingly,

$\nu(T(\omega^{*})) = \min_{\omega > 0} \nu(T(\omega)).$ (10)

Proof. According to Lemma 2.5 and Lemma 2.6, we know that the eigenvalues of the iteration matrix are an eigenvalue of multiplicity $p$, together with the roots of

(11)

If $\sigma_{i} = 0$, the eigenvalues of the form of Equation (11) equal 1, which cannot affect the value of $\nu(T(\omega))$. Therefore, without loss of generality, we only need to discuss the case $\sigma_{i} \neq 0$. The rest is similar to the proof of Theorem 3.1 in [4] and is omitted here.

3. Numerical Results

In this section, we use an example to demonstrate the numerical behavior of the PPHSS method as a solver, comparing its iteration steps (IT), elapsed CPU time in seconds (CPU) and relative residual error (RES) with those of other methods. The iteration is terminated once the relative residual of the current iterate falls below the prescribed tolerance or the prescribed maximal number of iteration steps is exceeded. All computations are implemented in MATLAB on a PC with an Intel(R) Celeron(R) CPU 1000M @ 1.80 GHz and 2.00 GB of memory.

Example 3.1 ([11]). Consider the saddle point problem (1), in which the blocks $B$ and $E$ of the coefficient matrix are built via Kronecker products (the symbol $\otimes$ denotes the Kronecker product), with $E$ rank-deficient, and the problem size is governed by a parameter $l$. The right-hand side vector $b$ is chosen so that the system (1) is consistent.

For Example 3.1, we choose $C$ to be the block-diagonal matrix of $B$. In Table 1, it is clear that the pseudo-spectral radii of the PPHSS and SPPHSS methods are much smaller than that of the PHSS method when the optimal parameters are employed.

Table 1. The optimal iteration parameters and pseudo-spectral radii.

Table 2. IT, CPU and RES for the tested methods.

In Table 2, we list the numerical results with respect to IT, CPU and RES of the tested methods for different problem sizes $l$. We see that the PPHSS and SPPHSS methods with appropriate parameters always outperform the PHSS method, both as solvers and as preconditioners for GMRES, in iteration steps and CPU time. To compute the matrix-vector products with $M(\omega,\tau)^{-1}$, we compute incomplete LU factorizations of $B$ and of the other relevant matrix blocks with drop tolerance 0.001. In the two tables, we use restarted GMRES(18) and preconditioned GMRES(18).
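The same building blocks (incomplete LU with a drop tolerance, used as a preconditioner for restarted GMRES(18)) have direct SciPy equivalents, sketched below. The 2D Laplacian here is only a stand-in sparse test matrix, not the saddle point example above:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# stand-in sparse test matrix: 2D Laplacian built via Kronecker products
n = 20
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()
b = A @ np.ones(A.shape[0])            # exact solution is the all-ones vector

# incomplete LU factorization with drop tolerance 0.001
ilu = spla.spilu(A, drop_tol=0.001)
M = spla.LinearOperator(A.shape, ilu.solve)

# restarted GMRES(18), preconditioned by the ILU factors
x, info = spla.gmres(A, b, M=M, restart=18)
```

Here `info == 0` signals convergence; without the preconditioner the same call needs considerably more iterations.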

NOTES

^{*}Corresponding author. This author is supported by National Natural Science Foundation of China under grant No. 61572018 and Zhejiang Provincial Natural Science Foundation of China under Grant No. LY15A010016.

References

[1] Elman, H.C., Ramage, A. and Silvester, D.J. (2007) Algorithm 866: IFISS, a Matlab Toolbox for Modelling Incompressible Flow. ACM Trans. Math. Softw., 33, 1-18.

[2] Bjorck, A. (1996) Numerical Methods for Least Squares Problems. SIAM, Philadelphia.
http://dx.doi.org/10.1137/1.9781611971484

[3] Bai, Z.Z., Golub, G.H. and Ng, M.K. (2003) Hermitian and Skew-Hermitian Splitting Methods for Non-Hermitian Positive Definite Linear Systems. SIAM J. Matrix Anal. Appl., 24, 603-626.
http://dx.doi.org/10.1137/S0895479801395458

[4] Li, X., Yang, A.L. and Wu, Y.J. (2014) Parameterized Preconditioned Hermitian and Skew-Hermitian Splitting Iteration Method for Saddle-Point Problems. Int. J. Comput. Math., 91, 1224-1238.
http://dx.doi.org/10.1080/00207160.2013.829216

[5] Chao, Z. and Zhang, N.M. (2014) A Generalized Preconditioned HSS Method for Singular Saddle Point Problems. Numer. Algorithms, 66, 203-221. http://dx.doi.org/10.1007/s11075-013-9730-y

[6] Berman, A. and Plemmons, R. (1979) Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York.

[7] Zhang, N.M. and Wei, Y.M. (2010) On the Convergence of General Stationary Iterative Methods for Range-Hermitian Singular Linear Systems. Numer. Linear Algebra Appl., 17, 139-154. http://dx.doi.org/10.1002/nla.663

[8] Chen, Y. and Zhang, N.M. (2014) A Note on the Generalization of Parameterized Inexact Uzawa Method for Singular Saddle Point Problems. Appl. Math. Comput., 325, 318-322. http://dx.doi.org/10.1016/j.amc.2014.02.089

[9] Golub, G.H. and Van Loan, C.F. (1996) Matrix Computations. 3rd Edition, The Johns Hopkins University Press, Baltimore.

[10] Young, D.M. (1971) Iterative Solution of Large Linear Systems. Academic Press, New York.

[11] Zheng, B., Bai, Z.Z. and Yang, X. (2009) On Semi-Convergence of Parameterized Uzawa Methods for Singular Saddle Point Problems. Linear Algebra Appl., 431, 808-817. http://dx.doi.org/10.1016/j.laa.2009.03.033