cgejsv.f
1 *> \brief \b CGEJSV
2 *
3 * =========== DOCUMENTATION ===========
4 *
5 * Online html documentation available at
6 * http://www.netlib.org/lapack/explore-html/
7 *
8 *> \htmlonly
9 *> Download CGEJSV + dependencies
10 *> <a href="http://www.netlib.org/cgi-bin/netlibfiles.tgz?format=tgz&filename=/lapack/lapack_routine/cgejsv.f">
11 *> [TGZ]</a>
12 *> <a href="http://www.netlib.org/cgi-bin/netlibfiles.zip?format=zip&filename=/lapack/lapack_routine/cgejsv.f">
13 *> [ZIP]</a>
14 *> <a href="http://www.netlib.org/cgi-bin/netlibfiles.txt?format=txt&filename=/lapack/lapack_routine/cgejsv.f">
15 *> [TXT]</a>
16 *> \endhtmlonly
17 *
18 * Definition:
19 * ===========
20 *
21 * SUBROUTINE CGEJSV( JOBA, JOBU, JOBV, JOBR, JOBT, JOBP,
22 * M, N, A, LDA, SVA, U, LDU, V, LDV,
23 * CWORK, LWORK, RWORK, LRWORK, IWORK, INFO )
24 *
25 * .. Scalar Arguments ..
26 * IMPLICIT NONE
27 * INTEGER INFO, LDA, LDU, LDV, LWORK, LRWORK, M, N
28 * ..
29 * .. Array Arguments ..
30 * COMPLEX A( LDA, * ), U( LDU, * ), V( LDV, * ), CWORK( LWORK )
31 * REAL SVA( N ), RWORK( LRWORK )
32 * INTEGER IWORK( * )
33 * CHARACTER*1 JOBA, JOBP, JOBR, JOBT, JOBU, JOBV
34 * ..
35 *
36 *
37 *> \par Purpose:
38 * =============
39 *>
40 *> \verbatim
41 *>
42 *> CGEJSV computes the singular value decomposition (SVD) of a complex M-by-N
43 *> matrix [A], where M >= N. The SVD of [A] is written as
44 *>
45 *> [A] = [U] * [SIGMA] * [V]^*,
46 *>
47 *> where [SIGMA] is an N-by-N (M-by-N) matrix which is zero except for its N
48 *> diagonal elements, [U] is an M-by-N (or M-by-M) unitary matrix, and
49 *> [V] is an N-by-N unitary matrix. The diagonal elements of [SIGMA] are
50 *> the singular values of [A]. The columns of [U] and [V] are the left and
51 *> the right singular vectors of [A], respectively. The matrices [U] and [V]
52 *> are computed and stored in the arrays U and V, respectively. The diagonal
53 *> of [SIGMA] is computed and stored in the array SVA.
54 *> \endverbatim
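*>
*> The following calling sketch is an editorial illustration, not part of
*> the original interface description: it computes the full SVD of a small
*> M-by-N matrix, with workspace sized from the minimal requirements
*> documented under LWORK, LRWORK and IWORK below.
*> \verbatim
*>       INTEGER            M, N
*>       PARAMETER          ( M = 4, N = 3 )
*>       COMPLEX            A( M, N ), U( M, N ), V( N, N )
*>       COMPLEX            CWORK( 5*N + 2*N*N )
*>       REAL               SVA( N ), RWORK( 2*M + 7 )
*>       INTEGER            IWORK( M + 3*N ), INFO
*> *     ... fill A(1:M,1:N) ...
*>       CALL CGEJSV( 'C', 'U', 'V', 'R', 'N', 'N', M, N, A, M, SVA,
*>      $             U, M, V, N, CWORK, 5*N + 2*N*N, RWORK, 2*M + 7,
*>      $             IWORK, INFO )
*> *     On successful exit (INFO = 0) the singular values of A are
*> *     ( RWORK(1) / RWORK(2) ) * SVA(1:N).
*> \endverbatim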
55 *>
56 *> Arguments:
57 *> ==========
58 *>
59 *> \param[in] JOBA
60 *> \verbatim
61 *> JOBA is CHARACTER*1
62 *> Specifies the level of accuracy:
63 *> = 'C': This option works well (high relative accuracy) if A = B * D,
64 *> with well-conditioned B and arbitrary diagonal matrix D.
65 *> The accuracy cannot be spoiled by COLUMN scaling. The
66 *> accuracy of the computed output depends on the condition of
67 *> B, and the procedure aims at the best theoretical accuracy.
68 *> The relative error max_{i=1:N}|d sigma_i| / sigma_i is
69 *> bounded by f(M,N)*epsilon* cond(B), independent of D.
70 *> The input matrix is preprocessed with the QRF with column
71 *> pivoting. This initial preprocessing and preconditioning by
72 *> a rank revealing QR factorization is common for all values of
73 *> JOBA. Additional actions are specified as follows:
74 *> = 'E': Computation as with 'C' with an additional estimate of the
75 *> condition number of B. It provides a realistic error bound.
76 *> = 'F': If A = D1 * C * D2 with ill-conditioned diagonal scalings
77 *> D1, D2, and well-conditioned matrix C, this option gives
78 *> higher accuracy than the 'C' option. If the structure of the
79 *> input matrix is not known, and relative accuracy is
80 *> desirable, then this option is advisable. The input matrix A
81 *> is preprocessed with QR factorization with FULL (row and
82 *> column) pivoting.
83 *> = 'G': Computation as with 'F' with an additional estimate of the
84 *> condition number of B, where A=B*D. If A has heavily weighted
85 *> rows, then using this condition number gives an overly pessimistic
86 *> error bound.
87 *> = 'A': Small singular values are not well determined by the data
88 *> and are considered as noisy; the matrix is treated as
89 *> numerically rank deficient. The error in the computed
90 *> singular values is bounded by f(m,n)*epsilon*||A||.
91 *> The computed SVD A = U * S * V^* restores A up to
92 *> f(m,n)*epsilon*||A||.
93 *> This gives the procedure the licence to discard (set to zero)
94 *> all singular values below N*epsilon*||A||.
95 *> = 'R': Similar to 'A'. The rank revealing property of the initial
96 *> QR factorization is used to reveal (using the triangular factor)
97 *> a gap sigma_{r+1} < epsilon * sigma_r in which case the
98 *> numerical RANK is declared to be r. The SVD is computed with
99 *> absolute error bounds, but more accurately than with 'A'.
100 *> \endverbatim
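*>
*> An editorial illustration of the difference between the relative
*> accuracy modes ('C', 'E', 'F', 'G') and the absolute accuracy modes
*> ('A', 'R'), using the bounds quoted above: in single precision
*> (epsilon about 6.0E-8), for A = B * D with cond(B) about 1.0E+2 and an
*> arbitrarily wildly scaled diagonal D,
*> \verbatim
*>    JOBA = 'C' :  |d sigma_i| / sigma_i  <~  f(M,N) * 6.0E-8 * 1.0E+2,
*>                  so every singular value keeps several correct digits,
*>                  however small it is compared to ||A||;
*>    JOBA = 'A' :  |d sigma_i|  <~  f(M,N) * 6.0E-8 * ||A||,
*>                  so singular values below roughly N * 6.0E-8 * ||A||
*>                  may carry no correct digits and may be set to zero.
*> \endverbatim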
101 *>
102 *> \param[in] JOBU
103 *> \verbatim
104 *> JOBU is CHARACTER*1
105 *> Specifies whether to compute the columns of U:
106 *> = 'U': N columns of U are returned in the array U.
107 *> = 'F': full set of M left sing. vectors is returned in the array U.
108 *> = 'W': U may be used as workspace of length M*N. See the description
109 *> of U.
110 *> = 'N': U is not computed.
111 *> \endverbatim
112 *>
113 *> \param[in] JOBV
114 *> \verbatim
115 *> JOBV is CHARACTER*1
116 *> Specifies whether to compute the matrix V:
117 *> = 'V': N columns of V are returned in the array V; Jacobi rotations
118 *> are not explicitly accumulated.
119 *> = 'J': N columns of V are returned in the array V, but they are
120 *> computed as the product of Jacobi rotations, if JOBT .EQ. 'N'.
121 *> = 'W': V may be used as workspace of length N*N. See the description
122 *> of V.
123 *> = 'N': V is not computed.
124 *> \endverbatim
125 *>
126 *> \param[in] JOBR
127 *> \verbatim
128 *> JOBR is CHARACTER*1
129 *> Specifies the RANGE for the singular values. Issues the licence to
130 *> set to zero small positive singular values if they are outside
131 *> the specified range. If A .NE. 0 is scaled so that the largest singular
132 *> value of c*A is around SQRT(BIG), BIG=SLAMCH('O'), then JOBR issues
133 *> the licence to kill columns of A whose norm in c*A is less than
134 *> SQRT(SFMIN) (for JOBR.EQ.'R'), or less than SMALL=SFMIN/EPSLN,
135 *> where SFMIN=SLAMCH('S'), EPSLN=SLAMCH('E').
136 *> = 'N': Do not kill small columns of c*A. This option assumes that
137 *> BLAS and QR factorizations and triangular solvers are
138 *> implemented to work in that range. If the condition of A
139 *> is greater than BIG, use CGESVJ.
140 *> = 'R': RESTRICTED range for sigma(c*A) is [SQRT(SFMIN), SQRT(BIG)]
141 *> (roughly, as described above). This option is recommended.
142 *> ===========================
143 *> For computing the singular values in the FULL range [SFMIN,BIG]
144 *> use CGESVJ.
145 *> \endverbatim
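*>
*> For orientation (an editorial note): on IEEE single precision machines
*> SFMIN = SLAMCH('S') is about 1.2E-38 and BIG = SLAMCH('O') is about
*> 3.4E+38, so with JOBR = 'R' the singular values of the scaled matrix
*> c*A are computed inside approximately
*> \verbatim
*>    [ SQRT(SFMIN), SQRT(BIG) ] ~ [ 1.1E-19, 1.8E+19 ].
*> \endverbatim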
146 *>
147 *> \param[in] JOBT
148 *> \verbatim
149 *> JOBT is CHARACTER*1
150 *> If the matrix is square then the procedure may determine to use
151 *> transposed A if A^* seems to be better with respect to convergence.
152 *> If the matrix is not square, JOBT is ignored.
153 *> The decision is based on two values of entropy over the adjoint
154 *> orbit of A^* * A. See the descriptions of RWORK(6) and RWORK(7).
155 *> = 'T': transpose if entropy test indicates possibly faster
156 *> convergence of Jacobi process if A^* is taken as input. If A is
157 *> replaced with A^*, then the row pivoting is included automatically.
158 *> = 'N': do not speculate.
159 *> The option 'T' can be used to compute only the singular values, or
160 *> the full SVD (U, SIGMA and V). For only one set of singular vectors
161 *> (U or V), the caller should provide both U and V, as one of the
162 *> matrices is used as workspace if the matrix A is transposed.
163 *> The implementer can easily remove this constraint and make the
164 *> code more complicated. See the descriptions of U and V.
165 *> In general, this option is considered experimental, and 'N' should
166 *> be preferred. This is subject to changes in the future.
167 *> \endverbatim
168 *>
169 *> \param[in] JOBP
170 *> \verbatim
171 *> JOBP is CHARACTER*1
172 *> Issues the licence to introduce structured perturbations to drown
173 *> denormalized numbers. This licence should be active if the
174 *> denormals are poorly implemented, causing slow computation,
175 *> especially in cases of fast convergence (!). For details see [1,2].
176 *> For the sake of simplicity, these perturbations are included only
177 *> when the full SVD or only the singular values are requested. The
178 *> implementer/user can easily add the perturbation for the cases of
179 *> computing one set of singular vectors.
180 *> = 'P': introduce perturbation
181 *> = 'N': do not perturb
182 *> \endverbatim
183 *>
184 *> \param[in] M
185 *> \verbatim
186 *> M is INTEGER
187 *> The number of rows of the input matrix A. M >= 0.
188 *> \endverbatim
189 *>
190 *> \param[in] N
191 *> \verbatim
192 *> N is INTEGER
193 *> The number of columns of the input matrix A. M >= N >= 0.
194 *> \endverbatim
195 *>
196 *> \param[in,out] A
197 *> \verbatim
198 *> A is COMPLEX array, dimension (LDA,N)
199 *> On entry, the M-by-N matrix A.
200 *> \endverbatim
201 *>
202 *> \param[in] LDA
203 *> \verbatim
204 *> LDA is INTEGER
205 *> The leading dimension of the array A. LDA >= max(1,M).
206 *> \endverbatim
207 *>
208 *> \param[out] SVA
209 *> \verbatim
210 *> SVA is REAL array, dimension (N)
211 *> On exit,
212 *> - For RWORK(1)/RWORK(2) = ONE: The singular values of A. During the
213 *> computation SVA contains Euclidean column norms of the
214 *> iterated matrices in the array A.
215 *> - For RWORK(1) .NE. RWORK(2): The singular values of A are
216 *> (RWORK(1)/RWORK(2)) * SVA(1:N). This factored form is used if
217 *> sigma_max(A) overflows or if small singular values have been
218 *> saved from underflow by scaling the input matrix A.
219 *> - If JOBR='R' then some of the singular values may be returned
220 *> as exact zeros obtained by "set to zero" because they are
221 *> below the numerical rank threshold or are denormalized numbers.
222 *> \endverbatim
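*>
*> A small post-processing sketch (editorial illustration; SCALE and SIGMA
*> are caller-side names, with SIGMA a REAL array of length N): the
*> factored representation can be resolved in the same way whether or not
*> the protective scaling was actually applied.
*> \verbatim
*>       SCALE = RWORK(1) / RWORK(2)
*>       DO 10 I = 1, N
*>          SIGMA( I ) = SCALE * SVA( I )
*>    10 CONTINUE
*> \endverbatim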
223 *>
224 *> \param[out] U
225 *> \verbatim
226 *> U is COMPLEX array, dimension ( LDU, N ) or ( LDU, M )
227 *> If JOBU = 'U', then U contains on exit the M-by-N matrix of
228 *> the left singular vectors.
229 *> If JOBU = 'F', then U contains on exit the M-by-M matrix of
230 *> the left singular vectors, including an ONB
231 *> of the orthogonal complement of the Range(A).
232 *> If JOBU = 'W' .AND. (JOBV.EQ.'V' .AND. JOBT.EQ.'T' .AND. M.EQ.N),
233 *> then U is used as workspace if the procedure
234 *> replaces A with A^*. In that case, [V] is computed
235 *> in U as left singular vectors of A^* and then
236 *> copied back to the V array. This 'W' option is just
237 *> a reminder to the caller that in this case U is
238 *> reserved as workspace of length N*N.
239 *> If JOBU = 'N' U is not referenced, unless JOBT='T'.
240 *> \endverbatim
241 *>
242 *> \param[in] LDU
243 *> \verbatim
244 *> LDU is INTEGER
245 *> The leading dimension of the array U, LDU >= 1.
246 *> IF JOBU = 'U' or 'F' or 'W', then LDU >= M.
247 *> \endverbatim
248 *>
249 *> \param[out] V
250 *> \verbatim
251 *> V is COMPLEX array, dimension ( LDV, N )
252 *> If JOBV = 'V', 'J' then V contains on exit the N-by-N matrix of
253 *> the right singular vectors;
254 *> If JOBV = 'W', AND (JOBU.EQ.'U' AND JOBT.EQ.'T' AND M.EQ.N),
255 *> then V is used as workspace if the procedure
256 *> replaces A with A^*. In that case, [U] is computed
257 *> in V as right singular vectors of A^* and then
258 *> copied back to the U array. This 'W' option is just
259 *> a reminder to the caller that in this case V is
260 *> reserved as workspace of length N*N.
261 *> If JOBV = 'N' V is not referenced, unless JOBT='T'.
262 *> \endverbatim
263 *>
264 *> \param[in] LDV
265 *> \verbatim
266 *> LDV is INTEGER
267 *> The leading dimension of the array V, LDV >= 1.
268 *> If JOBV = 'V' or 'J' or 'W', then LDV >= N.
269 *> \endverbatim
270 *>
271 *> \param[out] CWORK
272 *> \verbatim
273 *> CWORK is COMPLEX array, dimension (MAX(2,LWORK))
274 *> If the call to CGEJSV is a workspace query (indicated by LWORK=-1 or
275 *> LRWORK=-1), then on exit CWORK(1) contains the required length of
276 *> CWORK for the job parameters used in the call.
277 *> \endverbatim
278 *>
279 *> \param[in] LWORK
280 *> \verbatim
281 *> LWORK is INTEGER
282 *> Length of CWORK to confirm proper allocation of workspace.
283 *> LWORK depends on the job:
284 *>
285 *> 1. If only SIGMA is needed ( JOBU.EQ.'N', JOBV.EQ.'N' ) and
286 *> 1.1 .. no scaled condition estimate required (JOBA.NE.'E'.AND.JOBA.NE.'G'):
287 *> LWORK >= 2*N+1. This is the minimal requirement.
288 *> ->> For optimal performance (blocked code) the optimal value
289 *> is LWORK >= N + (N+1)*NB. Here NB is the optimal
290 *> block size for CGEQP3 and CGEQRF.
291 *> In general, optimal LWORK is computed as
292 *> LWORK >= max(N+LWORK(CGEQP3),N+LWORK(CGEQRF), LWORK(CGESVJ)).
293 *> 1.2. .. an estimate of the scaled condition number of A is
294 *> required (JOBA='E', or 'G'). In this case, the minimal
295 *> requirement is LWORK >= N*N + 2*N.
296 *> ->> For optimal performance (blocked code) the optimal value
297 *> is LWORK >= max(N+(N+1)*NB, N*N+2*N)=N**2+2*N.
298 *> In general, the optimal length LWORK is computed as
299 *> LWORK >= max(N+LWORK(CGEQP3),N+LWORK(CGEQRF), LWORK(CGESVJ),
300 *> N*N+LWORK(CPOCON)).
301 *> 2. If SIGMA and the right singular vectors are needed (JOBV.EQ.'V'),
302 *> (JOBU.EQ.'N')
303 *> 2.1 .. no scaled condition estimate requested (JOBA.NE.'E' .AND. JOBA.NE.'G'):
304 *> -> the minimal requirement is LWORK >= 3*N.
305 *> -> For optimal performance,
306 *> LWORK >= max(N+(N+1)*NB, 2*N+N*NB)=2*N+N*NB,
307 *> where NB is the optimal block size for CGEQP3, CGEQRF, CGELQF,
308 *> CUNMLQ. In general, the optimal length LWORK is computed as
309 *> LWORK >= max(N+LWORK(CGEQP3), N+LWORK(CGESVJ),
310 *> N+LWORK(CGELQF), 2*N+LWORK(CGEQRF), N+LWORK(CUNMLQ)).
311 *> 2.2 .. an estimate of the scaled condition number of A is
312 *> required (JOBA='E', or 'G').
313 *> -> the minimal requirement is LWORK >= 3*N.
314 *> -> For optimal performance,
315 *> LWORK >= max(N+(N+1)*NB, 2*N,2*N+N*NB)=2*N+N*NB,
316 *> where NB is the optimal block size for CGEQP3, CGEQRF, CGELQF,
317 *> CUNMLQ. In general, the optimal length LWORK is computed as
318 *> LWORK >= max(N+LWORK(CGEQP3), LWORK(CPOCON), N+LWORK(CGESVJ),
319 *> N+LWORK(CGELQF), 2*N+LWORK(CGEQRF), N+LWORK(CUNMLQ)).
320 *> 3. If SIGMA and the left singular vectors are needed
321 *> 3.1 .. no scaled condition estimate requested (JOBA.NE.'E' .AND. JOBA.NE.'G'):
322 *> -> the minimal requirement is LWORK >= 3*N.
323 *> -> For optimal performance:
324 *> if JOBU.EQ.'U' :: LWORK >= max(3*N, N+(N+1)*NB, 2*N+N*NB)=2*N+N*NB,
325 *> where NB is the optimal block size for CGEQP3, CGEQRF, CUNMQR.
326 *> In general, the optimal length LWORK is computed as
327 *> LWORK >= max(N+LWORK(CGEQP3), 2*N+LWORK(CGEQRF), N+LWORK(CUNMQR)).
328 *> 3.2 .. an estimate of the scaled condition number of A is
329 *> required (JOBA='E', or 'G').
330 *> -> the minimal requirement is LWORK >= 3*N.
331 *> -> For optimal performance:
332 *> if JOBU.EQ.'U' :: LWORK >= max(3*N, N+(N+1)*NB, 2*N+N*NB)=2*N+N*NB,
333 *> where NB is the optimal block size for CGEQP3, CGEQRF, CUNMQR.
334 *> In general, the optimal length LWORK is computed as
335 *> LWORK >= max(N+LWORK(CGEQP3),N+LWORK(CPOCON),
336 *> 2*N+LWORK(CGEQRF), N+LWORK(CUNMQR)).
337 *>
338 *> 4. If the full SVD is needed: (JOBU.EQ.'U' or JOBU.EQ.'F') and
339 *> 4.1. if JOBV.EQ.'V'
340 *> the minimal requirement is LWORK >= 5*N+2*N*N.
341 *> 4.2. if JOBV.EQ.'J' the minimal requirement is
342 *> LWORK >= 4*N+N*N.
343 *> In both cases, the allocated CWORK can accommodate blocked runs
344 *> of CGEQP3, CGEQRF, CGELQF, CUNMQR, CUNMLQ.
345 *>
346 *> If the call to CGEJSV is a workspace query (indicated by LWORK=-1 or
347 *> LRWORK=-1), then on exit CWORK(1) contains the optimal and CWORK(2) contains the
348 *> minimal length of CWORK for the job parameters used in the call.
349 *> \endverbatim
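*>
*> The query itself can be issued as in the following editorial sketch,
*> assuming M, N, A, LDA, SVA, U, LDU, V, LDV are already set up as for
*> the actual call; LWORK, LRWORK and LIWORK are caller-side integers
*> (illustrative names) used afterwards to allocate the work arrays.
*> \verbatim
*>       COMPLEX            CDUM( 2 )
*>       REAL               RDUM( 7 )
*>       INTEGER            IDUM( 4 ), INFO
*> *     LWORK = -1 and LRWORK = -1 request a workspace query only
*>       CALL CGEJSV( 'C', 'U', 'V', 'R', 'N', 'N', M, N, A, LDA, SVA,
*>      $             U, LDU, V, LDV, CDUM, -1, RDUM, -1, IDUM, INFO )
*>       LWORK  = INT( CDUM( 1 ) )
*> *     CDUM(2) holds the minimal admissible LWORK
*>       LRWORK = INT( RDUM( 1 ) )
*>       LIWORK = IDUM( 1 )
*> \endverbatim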
350 *>
351 *> \param[out] RWORK
352 *> \verbatim
353 *> RWORK is REAL array, dimension (MAX(7,LRWORK))
354 *> On exit,
355 *> RWORK(1) = Determines the scaling factor SCALE = RWORK(1) / RWORK(2)
356 *> such that SCALE*SVA(1:N) are the computed singular values
357 *> of A. (See the description of SVA().)
358 *> RWORK(2) = See the description of RWORK(1).
359 *> RWORK(3) = SCONDA is an estimate for the condition number of
360 *> column equilibrated A. (If JOBA .EQ. 'E' or 'G')
361 *> SCONDA is an estimate of SQRT(||(R^* * R)^(-1)||_1).
362 *> It is computed using CPOCON. It holds
363 *> N^(-1/4) * SCONDA <= ||R^(-1)||_2 <= N^(1/4) * SCONDA
364 *> where R is the triangular factor from the QRF of A.
365 *> However, if R is truncated and the numerical rank is
366 *> determined to be strictly smaller than N, SCONDA is
367 *> returned as -1, thus indicating that the smallest
368 *> singular values might be lost.
369 *>
370 *> If full SVD is needed, the following two condition numbers are
371 *> useful for the analysis of the algorithm. They are provided for
372 *> a developer/implementer who is familiar with the details of
373 *> the method.
374 *>
375 *> RWORK(4) = an estimate of the scaled condition number of the
376 *> triangular factor in the first QR factorization.
377 *> RWORK(5) = an estimate of the scaled condition number of the
378 *> triangular factor in the second QR factorization.
379 *> The following two parameters are computed if JOBT .EQ. 'T'.
380 *> They are provided for a developer/implementer who is familiar
381 *> with the details of the method.
382 *> RWORK(6) = the entropy of A^* * A :: this is the Shannon entropy
383 *> of diag(A^* * A) / Trace(A^* * A) taken as point in the
384 *> probability simplex.
385 *> RWORK(7) = the entropy of A * A^*. (See the description of RWORK(6).)
386 *> If the call to CGEJSV is a workspace query (indicated by LWORK=-1 or
387 *> LRWORK=-1), then on exit RWORK(1) contains the required length of
388 *> RWORK for the job parameters used in the call.
389 *> \endverbatim
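*>
*> Reading the diagnostic outputs after a call with JOBA = 'E' or 'G'
*> (editorial sketch; SCALE and SCONDA are caller-side REAL variables):
*> \verbatim
*>       SCALE  = RWORK(1) / RWORK(2)
*>       SCONDA = RWORK(3)
*>       IF ( SCONDA .GT. 0.0E0 ) THEN
*> *        per the inequality above, SCONDA brackets ||R^(-1)||_2
*> *        within the factors N**(-1/4) and N**(1/4)
*>       ELSE
*> *        SCONDA = -1 : R was truncated; the smallest singular
*> *        values may have been lost
*>       END IF
*> \endverbatim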
390 *>
391 *> \param[in] LRWORK
392 *> \verbatim
393 *> LRWORK is INTEGER
394 *> Length of RWORK to confirm proper allocation of workspace.
395 *> LRWORK depends on the job:
396 *>
397 *> 1. If only the singular values are requested i.e. if
398 *> LSAME(JOBU,'N') .AND. LSAME(JOBV,'N')
399 *> then:
400 *> 1.1. If LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G'),
401 *> then: LRWORK = max( 7, 2 * M ).
402 *> 1.2. Otherwise, LRWORK = max( 7, N ).
403 *> 2. If singular values with the right singular vectors are requested
404 *> i.e. if
405 *> (LSAME(JOBV,'V').OR.LSAME(JOBV,'J')) .AND.
406 *> .NOT.(LSAME(JOBU,'U').OR.LSAME(JOBU,'F'))
407 *> then:
408 *> 2.1. If LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G'),
409 *> then LRWORK = max( 7, 2 * M ).
410 *> 2.2. Otherwise, LRWORK = max( 7, N ).
411 *> 3. If singular values with the left singular vectors are requested, i.e. if
412 *> (LSAME(JOBU,'U').OR.LSAME(JOBU,'F')) .AND.
413 *> .NOT.(LSAME(JOBV,'V').OR.LSAME(JOBV,'J'))
414 *> then:
415 *> 3.1. If LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G'),
416 *> then LRWORK = max( 7, 2 * M ).
417 *> 3.2. Otherwise, LRWORK = max( 7, N ).
418 *> 4. If singular values with both the left and the right singular vectors
419 *> are requested, i.e. if
420 *> (LSAME(JOBU,'U').OR.LSAME(JOBU,'F')) .AND.
421 *> (LSAME(JOBV,'V').OR.LSAME(JOBV,'J'))
422 *> then:
423 *> 4.1. If LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G'),
424 *> then LRWORK = max( 7, 2 * M ).
425 *> 4.2. Otherwise, LRWORK = max( 7, N ).
426 *>
427 *> If, on entry, LRWORK = -1 or LWORK = -1, a workspace query is assumed and
428 *> the length of RWORK is returned in RWORK(1).
429 *> \endverbatim
430 *>
431 *> \param[out] IWORK
432 *> \verbatim
433 *> IWORK is INTEGER array, of dimension at least 4, that further depends
434 *> on the job:
435 *>
436 *> 1. If only the singular values are requested then:
437 *> If ( LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G') )
438 *> then the length of IWORK is N+M; otherwise the length of IWORK is N.
439 *> 2. If the singular values and the right singular vectors are requested then:
440 *> If ( LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G') )
441 *> then the length of IWORK is N+M; otherwise the length of IWORK is N.
442 *> 3. If the singular values and the left singular vectors are requested then:
443 *> If ( LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G') )
444 *> then the length of IWORK is N+M; otherwise the length of IWORK is N.
445 *> 4. If the singular values with both the left and the right singular vectors
446 *> are requested, then:
447 *> 4.1. If LSAME(JOBV,'J') the length of IWORK is determined as follows:
448 *> If ( LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G') )
449 *> then the length of IWORK is N+M; otherwise the length of IWORK is N.
450 *> 4.2. If LSAME(JOBV,'V') the length of IWORK is determined as follows:
451 *> If ( LSAME(JOBT,'T') .OR. LSAME(JOBA,'F') .OR. LSAME(JOBA,'G') )
452 *> then the length of IWORK is 2*N+M; otherwise the length of IWORK is 2*N.
453 *>
454 *> On exit,
455 *> IWORK(1) = the numerical rank determined after the initial
456 *> QR factorization with pivoting. See the descriptions
457 *> of JOBA and JOBR.
458 *> IWORK(2) = the number of the computed nonzero singular values
459 *> IWORK(3) = if nonzero, a warning message:
460 *> If IWORK(3).EQ.1 then some of the column norms of A
461 *> were denormalized floats. The requested high accuracy
462 *> is not warranted by the data.
463 *> IWORK(4) = 1 or -1. If IWORK(4) .EQ. 1, then the procedure used A^* to
464 *> do the job as specified by the JOB parameters.
465 *> If the call to CGEJSV is a workspace query (indicated by LWORK .EQ. -1 and
466 *> LRWORK .EQ. -1), then on exit IWORK(1) contains the required length of
467 *> IWORK for the job parameters used in the call.
468 *> \endverbatim
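*>
*> A typical check of the integer diagnostics after the call (editorial
*> sketch; NRANK and NNZSV are caller-side names):
*> \verbatim
*>       IF ( INFO .EQ. 0 ) THEN
*>          NRANK = IWORK( 1 )
*>          NNZSV = IWORK( 2 )
*>          IF ( IWORK( 3 ) .EQ. 1 ) THEN
*> *           denormalized column norms were detected; the requested
*> *           high relative accuracy is not warranted by the data
*>          END IF
*>       END IF
*> \endverbatim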
469 *>
470 *> \param[out] INFO
471 *> \verbatim
472 *> INFO is INTEGER
473 *> < 0 : if INFO = -i, then the i-th argument had an illegal value.
474 *> = 0 : successful exit;
475 *> > 0 : CGEJSV did not converge in the maximal allowed number
476 *> of sweeps. The computed values may be inaccurate.
477 *> \endverbatim
478 *
479 * Authors:
480 * ========
481 *
482 *> \author Univ. of Tennessee
483 *> \author Univ. of California Berkeley
484 *> \author Univ. of Colorado Denver
485 *> \author NAG Ltd.
486 *
487 *> \date June 2016
488 *
489 *> \ingroup complexGEsing
490 *
491 *> \par Further Details:
492 * =====================
493 *>
494 *> \verbatim
495 *> CGEJSV implements a preconditioned Jacobi SVD algorithm. It uses CGEQP3,
496 *> CGEQRF, and CGELQF as preprocessors and preconditioners. Optionally, an
497 *> additional row pivoting can be used as a preprocessor, which in some
498 *> cases results in much higher accuracy. An example is a matrix A with the
499 *> structure A = D1 * C * D2, where D1, D2 are arbitrarily ill-conditioned
500 *> diagonal matrices and C is a well-conditioned matrix. In that case, complete
501 *> pivoting in the first QR factorization provides accuracy dependent on the
502 *> condition number of C, and independent of D1, D2. Such higher accuracy is
503 *> not completely understood theoretically, but it works well in practice.
504 *> Further, if A can be written as A = B*D, with well-conditioned B and some
505 *> diagonal D, then the high accuracy is guaranteed, both theoretically and
506 *> in software, independent of D. For more details see [1], [2].
507 *> The computational range for the singular values can be the full range
508 *> ( UNDERFLOW,OVERFLOW ), provided that the machine arithmetic and the BLAS
509 *> & LAPACK routines called by CGEJSV are implemented to work in that range.
510 *> If that is not the case, then the restriction for safe computation with
511 *> the singular values in the range of normalized IEEE numbers is that the
512 *> spectral condition number kappa(A)=sigma_max(A)/sigma_min(A) does not
513 *> overflow. This code (CGEJSV) is best used in this restricted range,
514 *> meaning that singular values of magnitude below ||A||_2 / SLAMCH('O') are
515 *> returned as zeros. See JOBR for details on this.
516 *> Further, this implementation is somewhat slower than the one described
517 *> in [1,2] due to replacement of some non-LAPACK components, and because
518 *> the choice of some tuning parameters in the iterative part (CGESVJ) is
519 *> left to the implementer on a particular machine.
520 *> The rank revealing QR factorization (in this code: CGEQP3) should be
521 *> implemented as in [3]. We have a new version of CGEQP3 under development
522 *> that is more robust than the current one in LAPACK, with a cleaner cut in
523 *> rank deficient cases. It will be available in the SIGMA library [4].
524 *> If M is much larger than N, it is obvious that the initial QRF with
525 *> column pivoting can be preprocessed by the QRF without pivoting. That
526 *> well known trick is not used in CGEJSV because in some cases heavy row
527 *> weighting can be treated with complete pivoting. The overhead in cases
528 *> M much larger than N is then only due to pivoting, but the benefits in
529 *> terms of accuracy have prevailed. The implementer/user can incorporate
530 *> this extra QRF step easily. The implementer can also improve data movement
531 *> (matrix transpose, matrix copy, matrix transposed copy) - this
532 *> implementation of CGEJSV uses only the simplest, naive data movement.
533 *> \endverbatim
534 *
535 *> \par Contributor:
536 * ==================
537 *>
538 *> Zlatko Drmac (Zagreb, Croatia)
539 *
540 *> \par References:
541 * ================
542 *>
543 *> \verbatim
544 *>
545 *> [1] Z. Drmac and K. Veselic: New fast and accurate Jacobi SVD algorithm I.
546 *> SIAM J. Matrix Anal. Appl. Vol. 29, No. 4 (2008), pp. 1322-1342.
547 *> LAPACK Working note 169.
548 *> [2] Z. Drmac and K. Veselic: New fast and accurate Jacobi SVD algorithm II.
549 *> SIAM J. Matrix Anal. Appl. Vol. 29, No. 4 (2008), pp. 1343-1362.
550 *> LAPACK Working note 170.
551 *> [3] Z. Drmac and Z. Bujanovic: On the failure of rank-revealing QR
552 *> factorization software - a case study.
553 *> ACM Trans. Math. Softw. Vol. 35, No 2 (2008), pp. 1-28.
554 *> LAPACK Working note 176.
555 *> [4] Z. Drmac: SIGMA - mathematical software library for accurate SVD, PSV,
556 *> QSVD, (H,K)-SVD computations.
557 *> Department of Mathematics, University of Zagreb, 2008, 2016.
558 *> \endverbatim
559 *
560 *> \par Bugs, examples and comments:
561 * =================================
562 *>
563 *> Please report all bugs and send interesting examples and/or comments to
564 *> [email protected]. Thank you.
565 *>
566 * =====================================================================
567  SUBROUTINE cgejsv( JOBA, JOBU, JOBV, JOBR, JOBT, JOBP,
568  $ m, n, a, lda, sva, u, ldu, v, ldv,
569  $ cwork, lwork, rwork, lrwork, iwork, info )
570 *
571 * -- LAPACK computational routine (version 3.7.0) --
572 * -- LAPACK is a software package provided by Univ. of Tennessee, --
573 * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..--
574 * December 2016
575 *
576 * .. Scalar Arguments ..
577  IMPLICIT NONE
578  INTEGER INFO, LDA, LDU, LDV, LWORK, LRWORK, M, N
579 * ..
580 * .. Array Arguments ..
581  COMPLEX A( lda, * ), U( ldu, * ), V( ldv, * ), CWORK( lwork )
582  REAL SVA( n ), RWORK( lrwork )
583  INTEGER IWORK( * )
584  CHARACTER*1 JOBA, JOBP, JOBR, JOBT, JOBU, JOBV
585 * ..
586 *
587 * ===========================================================================
588 *
589 * .. Local Parameters ..
590  REAL ZERO, ONE
591  parameter( zero = 0.0e0, one = 1.0e0 )
592  COMPLEX CZERO, CONE
593  parameter( czero = ( 0.0e0, 0.0e0 ), cone = ( 1.0e0, 0.0e0 ) )
594 * ..
595 * .. Local Scalars ..
596  COMPLEX CTEMP
597  REAL AAPP, AAQQ, AATMAX, AATMIN, BIG, BIG1, COND_OK,
598  $ condr1, condr2, entra, entrat, epsln, maxprj, scalem,
599  $ sconda, sfmin, small, temp1, uscal1, uscal2, xsc
600  INTEGER IERR, N1, NR, NUMRANK, p, q, WARNING
601  LOGICAL ALMORT, DEFR, ERREST, GOSCAL, JRACC, KILL, LQUERY,
602  $ lsvec, l2aber, l2kill, l2pert, l2rank, l2tran, noscal,
603  $ rowpiv, rsvec, transp
604 *
605  INTEGER OPTWRK, MINWRK, MINRWRK, MINIWRK
606  INTEGER LWCON, LWLQF, LWQP3, LWQRF, LWUNMLQ, LWUNMQR, LWUNMQRM,
607  $ lwsvdj, lwsvdjv, lrwqp3, lrwcon, lrwsvdj, iwoff
608  INTEGER LWRK_CGELQF, LWRK_CGEQP3, LWRK_CGEQP3N, LWRK_CGEQRF,
609  $ lwrk_cgesvj, lwrk_cgesvjv, lwrk_cgesvju, lwrk_cunmlq,
610  $ lwrk_cunmqr, lwrk_cunmqrm
611 * ..
612 * .. Local Arrays
613  COMPLEX CDUMMY(1)
614  REAL RDUMMY(1)
615 *
616 * .. Intrinsic Functions ..
617  INTRINSIC abs, cmplx, conjg, alog, max, min, REAL, NINT, SQRT
618 * ..
619 * .. External Functions ..
620  REAL SLAMCH, SCNRM2
621  INTEGER ISAMAX, ICAMAX
622  LOGICAL LSAME
623  EXTERNAL isamax, icamax, lsame, slamch, scnrm2
624 * ..
625 * .. External Subroutines ..
626  EXTERNAL slassq, ccopy, cgelqf, cgeqp3, cgeqrf, clacpy, clapmr,
629  $ xerbla
630 *
631  EXTERNAL cgesvj
632 * ..
633 *
634 * Test the input arguments
635 *
636  lsvec = lsame( jobu, 'U' ) .OR. lsame( jobu, 'F' )
637  jracc = lsame( jobv, 'J' )
638  rsvec = lsame( jobv, 'V' ) .OR. jracc
639  rowpiv = lsame( joba, 'F' ) .OR. lsame( joba, 'G' )
640  l2rank = lsame( joba, 'R' )
641  l2aber = lsame( joba, 'A' )
642  errest = lsame( joba, 'E' ) .OR. lsame( joba, 'G' )
643  l2tran = lsame( jobt, 'T' ) .AND. ( m .EQ. n )
644  l2kill = lsame( jobr, 'R' )
645  defr = lsame( jobr, 'N' )
646  l2pert = lsame( jobp, 'P' )
647 *
648  lquery = ( lwork .EQ. -1 ) .OR. ( lrwork .EQ. -1 )
649 *
650  IF ( .NOT.(rowpiv .OR. l2rank .OR. l2aber .OR.
651  $ errest .OR. lsame( joba, 'C' ) )) THEN
652  info = - 1
653  ELSE IF ( .NOT.( lsvec .OR. lsame( jobu, 'N' ) .OR.
654  $ ( lsame( jobu, 'W' ) .AND. rsvec .AND. l2tran ) ) ) THEN
655  info = - 2
656  ELSE IF ( .NOT.( rsvec .OR. lsame( jobv, 'N' ) .OR.
657  $ ( lsame( jobv, 'W' ) .AND. lsvec .AND. l2tran ) ) ) THEN
658  info = - 3
659  ELSE IF ( .NOT. ( l2kill .OR. defr ) ) THEN
660  info = - 4
661  ELSE IF ( .NOT. ( lsame(jobt,'T') .OR. lsame(jobt,'N') ) ) THEN
662  info = - 5
663  ELSE IF ( .NOT. ( l2pert .OR. lsame( jobp, 'N' ) ) ) THEN
664  info = - 6
665  ELSE IF ( m .LT. 0 ) THEN
666  info = - 7
667  ELSE IF ( ( n .LT. 0 ) .OR. ( n .GT. m ) ) THEN
668  info = - 8
669  ELSE IF ( lda .LT. m ) THEN
670  info = - 10
671  ELSE IF ( lsvec .AND. ( ldu .LT. m ) ) THEN
672  info = - 13
673  ELSE IF ( rsvec .AND. ( ldv .LT. n ) ) THEN
674  info = - 15
675  ELSE
676 * #:)
677  info = 0
678  END IF
679 *
680  IF ( info .EQ. 0 ) THEN
681 * .. compute the minimal and the optimal workspace lengths
682 * [[The expressions for computing the minimal and the optimal
683 * values of LCWORK, LRWORK are written with a lot of redundancy and
684 * can be simplified. However, this verbose form is useful for
685 * maintenance and modifications of the code.]]
686 *
687 * .. minimal workspace length for CGEQP3 of an M x N matrix,
688 * CGEQRF of an N x N matrix, CGELQF of an N x N matrix,
689 * CUNMLQ for computing N x N matrix, CUNMQR for computing N x N
690 * matrix, CUNMQR for computing M x N matrix, respectively.
691  lwqp3 = n+1
692  lwqrf = max( 1, n )
693  lwlqf = max( 1, n )
694  lwunmlq = max( 1, n )
695  lwunmqr = max( 1, n )
696  lwunmqrm = max( 1, m )
697 * .. minimal workspace length for CPOCON of an N x N matrix
698  lwcon = 2 * n
699 * .. minimal workspace length for CGESVJ of an N x N matrix,
700 * without and with explicit accumulation of Jacobi rotations
701  lwsvdj = max( 2 * n, 1 )
702  lwsvdjv = max( 2 * n, 1 )
703 * .. minimal REAL workspace length for CGEQP3, CPOCON, CGESVJ
704  lrwqp3 = n
705  lrwcon = n
706  lrwsvdj = n
707  IF ( lquery ) THEN
708  CALL cgeqp3( m, n, a, lda, iwork, cdummy, cdummy, -1,
709  $ rdummy, ierr )
710  lwrk_cgeqp3 = cdummy(1)
711  CALL cgeqrf( n, n, a, lda, cdummy, cdummy,-1, ierr )
712  lwrk_cgeqrf = cdummy(1)
713  CALL cgelqf( n, n, a, lda, cdummy, cdummy,-1, ierr )
714  lwrk_cgelqf = cdummy(1)
715  END IF
716  minwrk = 2
717  optwrk = 2
718  miniwrk = n
719  IF ( .NOT. (lsvec .OR. rsvec ) ) THEN
720 * .. minimal and optimal sizes of the complex workspace if
721 * only the singular values are requested
722  IF ( errest ) THEN
723  minwrk = max( n+lwqp3, n**2+lwcon, n+lwqrf, lwsvdj )
724  ELSE
725  minwrk = max( n+lwqp3, n+lwqrf, lwsvdj )
726  END IF
727  IF ( lquery ) THEN
728  CALL cgesvj( 'L', 'N', 'N', n, n, a, lda, sva, n, v,
729  $ ldv, cdummy, -1, rdummy, -1, ierr )
730  lwrk_cgesvj = cdummy(1)
731  IF ( errest ) THEN
732  optwrk = max( n+lwrk_cgeqp3, n**2+lwcon,
733  $ n+lwrk_cgeqrf, lwrk_cgesvj )
734  ELSE
735  optwrk = max( n+lwrk_cgeqp3, n+lwrk_cgeqrf,
736  $ lwrk_cgesvj )
737  END IF
738  END IF
739  IF ( l2tran .OR. rowpiv ) THEN
740  IF ( errest ) THEN
741  minrwrk = max( 7, 2*m, lrwqp3, lrwcon, lrwsvdj )
742  ELSE
743  minrwrk = max( 7, 2*m, lrwqp3, lrwsvdj )
744  END IF
745  ELSE
746  IF ( errest ) THEN
747  minrwrk = max( 7, lrwqp3, lrwcon, lrwsvdj )
748  ELSE
749  minrwrk = max( 7, lrwqp3, lrwsvdj )
750  END IF
751  END IF
752  IF ( rowpiv .OR. l2tran ) miniwrk = miniwrk + m
753  ELSE IF ( rsvec .AND. (.NOT.lsvec) ) THEN
754 * .. minimal and optimal sizes of the complex workspace if the
755 * singular values and the right singular vectors are requested
756  IF ( errest ) THEN
757  minwrk = max( n+lwqp3, lwcon, lwsvdj, n+lwlqf,
758  $ 2*n+lwqrf, n+lwsvdj, n+lwunmlq )
759  ELSE
760  minwrk = max( n+lwqp3, lwsvdj, n+lwlqf, 2*n+lwqrf,
761  $ n+lwsvdj, n+lwunmlq )
762  END IF
763  IF ( lquery ) THEN
764  CALL cgesvj( 'L', 'U', 'N', n,n, u, ldu, sva, n, a,
765  $ lda, cdummy, -1, rdummy, -1, ierr )
766  lwrk_cgesvj = cdummy(1)
767  CALL cunmlq( 'L', 'C', n, n, n, a, lda, cdummy,
768  $ v, ldv, cdummy, -1, ierr )
769  lwrk_cunmlq = cdummy(1)
770  IF ( errest ) THEN
771  optwrk = max( n+lwrk_cgeqp3, lwcon, lwrk_cgesvj,
772  $ n+lwrk_cgelqf, 2*n+lwrk_cgeqrf,
773  $ n+lwrk_cgesvj, n+lwrk_cunmlq )
774  ELSE
775  optwrk = max( n+lwrk_cgeqp3, lwrk_cgesvj,n+lwrk_cgelqf,
776  $ 2*n+lwrk_cgeqrf, n+lwrk_cgesvj,
777  $ n+lwrk_cunmlq )
778  END IF
779  END IF
780  IF ( l2tran .OR. rowpiv ) THEN
781  IF ( errest ) THEN
782  minrwrk = max( 7, 2*m, lrwqp3, lrwsvdj, lrwcon )
783  ELSE
784  minrwrk = max( 7, 2*m, lrwqp3, lrwsvdj )
785  END IF
786  ELSE
787  IF ( errest ) THEN
788  minrwrk = max( 7, lrwqp3, lrwsvdj, lrwcon )
789  ELSE
790  minrwrk = max( 7, lrwqp3, lrwsvdj )
791  END IF
792  END IF
793  IF ( rowpiv .OR. l2tran ) miniwrk = miniwrk + m
794  ELSE IF ( lsvec .AND. (.NOT.rsvec) ) THEN
795 * .. minimal and optimal sizes of the complex workspace if the
796 * singular values and the left singular vectors are requested
797  IF ( errest ) THEN
798  minwrk = n + max( lwqp3,lwcon,n+lwqrf,lwsvdj,lwunmqrm )
799  ELSE
800  minwrk = n + max( lwqp3, n+lwqrf, lwsvdj, lwunmqrm )
801  END IF
802  IF ( lquery ) THEN
803  CALL cgesvj( 'L', 'U', 'N', n,n, u, ldu, sva, n, a,
804  $ lda, cdummy, -1, rdummy, -1, ierr )
805  lwrk_cgesvj = cdummy(1)
806  CALL cunmqr( 'L', 'N', m, n, n, a, lda, cdummy, u,
807  $ ldu, cdummy, -1, ierr )
808  lwrk_cunmqrm = cdummy(1)
809  IF ( errest ) THEN
810  optwrk = n + max( lwrk_cgeqp3, lwcon, n+lwrk_cgeqrf,
811  $ lwrk_cgesvj, lwrk_cunmqrm )
812  ELSE
813  optwrk = n + max( lwrk_cgeqp3, n+lwrk_cgeqrf,
814  $ lwrk_cgesvj, lwrk_cunmqrm )
815  END IF
816  END IF
817  IF ( l2tran .OR. rowpiv ) THEN
818  IF ( errest ) THEN
819  minrwrk = max( 7, 2*m, lrwqp3, lrwsvdj, lrwcon )
820  ELSE
821  minrwrk = max( 7, 2*m, lrwqp3, lrwsvdj )
822  END IF
823  ELSE
824  IF ( errest ) THEN
825  minrwrk = max( 7, lrwqp3, lrwsvdj, lrwcon )
826  ELSE
827  minrwrk = max( 7, lrwqp3, lrwsvdj )
828  END IF
829  END IF
830  IF ( rowpiv .OR. l2tran ) miniwrk = miniwrk + m
831  ELSE
832 * .. minimal and optimal sizes of the complex workspace if the
833 * full SVD is requested
834  IF ( .NOT. jracc ) THEN
835  IF ( errest ) THEN
836  minwrk = max( n+lwqp3, n+lwcon, 2*n+n**2+lwcon,
837  $ 2*n+lwqrf, 2*n+lwqp3,
838  $ 2*n+n**2+n+lwlqf, 2*n+n**2+n+n**2+lwcon,
839  $ 2*n+n**2+n+lwsvdj, 2*n+n**2+n+lwsvdjv,
840  $ 2*n+n**2+n+lwunmqr,2*n+n**2+n+lwunmlq,
841  $ n+n**2+lwsvdj, n+lwunmqrm )
842  ELSE
843  minwrk = max( n+lwqp3, 2*n+n**2+lwcon,
844  $ 2*n+lwqrf, 2*n+lwqp3,
845  $ 2*n+n**2+n+lwlqf, 2*n+n**2+n+n**2+lwcon,
846  $ 2*n+n**2+n+lwsvdj, 2*n+n**2+n+lwsvdjv,
847  $ 2*n+n**2+n+lwunmqr,2*n+n**2+n+lwunmlq,
848  $ n+n**2+lwsvdj, n+lwunmqrm )
849  END IF
850  miniwrk = miniwrk + n
851  IF ( rowpiv .OR. l2tran ) miniwrk = miniwrk + m
852  ELSE
853  IF ( errest ) THEN
854  minwrk = max( n+lwqp3, n+lwcon, 2*n+lwqrf,
855  $ 2*n+n**2+lwsvdjv, 2*n+n**2+n+lwunmqr,
856  $ n+lwunmqrm )
857  ELSE
858  minwrk = max( n+lwqp3, 2*n+lwqrf,
859  $ 2*n+n**2+lwsvdjv, 2*n+n**2+n+lwunmqr,
860  $ n+lwunmqrm )
861  END IF
862  IF ( rowpiv .OR. l2tran ) miniwrk = miniwrk + m
863  END IF
864  IF ( lquery ) THEN
865  CALL cunmqr( 'L', 'N', m, n, n, a, lda, cdummy, u,
866  $ ldu, cdummy, -1, ierr )
867  lwrk_cunmqrm = cdummy(1)
868  CALL cunmqr( 'L', 'N', n, n, n, a, lda, cdummy, u,
869  $ ldu, cdummy, -1, ierr )
870  lwrk_cunmqr = cdummy(1)
871  IF ( .NOT. jracc ) THEN
872  CALL cgeqp3( n,n, a, lda, iwork, cdummy,cdummy, -1,
873  $ rdummy, ierr )
874  lwrk_cgeqp3n = cdummy(1)
875  CALL cgesvj( 'L', 'U', 'N', n, n, u, ldu, sva,
876  $ n, v, ldv, cdummy, -1, rdummy, -1, ierr )
877  lwrk_cgesvj = cdummy(1)
878  CALL cgesvj( 'U', 'U', 'N', n, n, u, ldu, sva,
879  $ n, v, ldv, cdummy, -1, rdummy, -1, ierr )
880  lwrk_cgesvju = cdummy(1)
881  CALL cgesvj( 'L', 'U', 'V', n, n, u, ldu, sva,
882  $ n, v, ldv, cdummy, -1, rdummy, -1, ierr )
883  lwrk_cgesvjv = cdummy(1)
884  CALL cunmlq( 'L', 'C', n, n, n, a, lda, cdummy,
885  $ v, ldv, cdummy, -1, ierr )
886  lwrk_cunmlq = cdummy(1)
887  IF ( errest ) THEN
888  optwrk = max( n+lwrk_cgeqp3, n+lwcon,
889  $ 2*n+n**2+lwcon, 2*n+lwrk_cgeqrf,
890  $ 2*n+lwrk_cgeqp3n,
891  $ 2*n+n**2+n+lwrk_cgelqf,
892  $ 2*n+n**2+n+n**2+lwcon,
893  $ 2*n+n**2+n+lwrk_cgesvj,
894  $ 2*n+n**2+n+lwrk_cgesvjv,
895  $ 2*n+n**2+n+lwrk_cunmqr,
896  $ 2*n+n**2+n+lwrk_cunmlq,
897  $ n+n**2+lwrk_cgesvju,
898  $ n+lwrk_cunmqrm )
899  ELSE
900  optwrk = max( n+lwrk_cgeqp3,
901  $ 2*n+n**2+lwcon, 2*n+lwrk_cgeqrf,
902  $ 2*n+lwrk_cgeqp3n,
903  $ 2*n+n**2+n+lwrk_cgelqf,
904  $ 2*n+n**2+n+n**2+lwcon,
905  $ 2*n+n**2+n+lwrk_cgesvj,
906  $ 2*n+n**2+n+lwrk_cgesvjv,
907  $ 2*n+n**2+n+lwrk_cunmqr,
908  $ 2*n+n**2+n+lwrk_cunmlq,
909  $ n+n**2+lwrk_cgesvju,
910  $ n+lwrk_cunmqrm )
911  END IF
912  ELSE
913  CALL cgesvj( 'L', 'U', 'V', n, n, u, ldu, sva,
914  $ n, v, ldv, cdummy, -1, rdummy, -1, ierr )
915  lwrk_cgesvjv = cdummy(1)
916  CALL cunmqr( 'L', 'N', n, n, n, cdummy, n, cdummy,
917  $ v, ldv, cdummy, -1, ierr )
918  lwrk_cunmqr = cdummy(1)
919  CALL cunmqr( 'L', 'N', m, n, n, a, lda, cdummy, u,
920  $ ldu, cdummy, -1, ierr )
921  lwrk_cunmqrm = cdummy(1)
922  IF ( errest ) THEN
923  optwrk = max( n+lwrk_cgeqp3, n+lwcon,
924  $ 2*n+lwrk_cgeqrf, 2*n+n**2,
925  $ 2*n+n**2+lwrk_cgesvjv,
926  $ 2*n+n**2+n+lwrk_cunmqr,n+lwrk_cunmqrm )
927  ELSE
928  optwrk = max( n+lwrk_cgeqp3, 2*n+lwrk_cgeqrf,
929  $ 2*n+n**2, 2*n+n**2+lwrk_cgesvjv,
930  $ 2*n+n**2+n+lwrk_cunmqr,
931  $ n+lwrk_cunmqrm )
932  END IF
933  END IF
934  END IF
935  IF ( l2tran .OR. rowpiv ) THEN
936  minrwrk = max( 7, 2*m, lrwqp3, lrwsvdj, lrwcon )
937  ELSE
938  minrwrk = max( 7, lrwqp3, lrwsvdj, lrwcon )
939  END IF
940  END IF
941  minwrk = max( 2, minwrk )
942  optwrk = max( 2, optwrk )
943  IF ( lwork .LT. minwrk .AND. (.NOT.lquery) ) info = - 17
944  IF ( lrwork .LT. minrwrk .AND. (.NOT.lquery) ) info = - 19
945  END IF
946 *
947  IF ( info .NE. 0 ) THEN
948 * #:(
949  CALL xerbla( 'CGEJSV', - info )
950  RETURN
951  ELSE IF ( lquery ) THEN
952  cwork(1) = optwrk
953  cwork(2) = minwrk
954  rwork(1) = minrwrk
955  iwork(1) = max( 4, miniwrk )
956  RETURN
957  END IF
958 *
959 * Quick return for void matrix (Y3K safe)
960 * #:)
961  IF ( ( m .EQ. 0 ) .OR. ( n .EQ. 0 ) ) THEN
962  iwork(1:4) = 0
963  rwork(1:7) = 0
964  RETURN
965  ENDIF
966 *
967 * Determine whether the matrix U should be M x N or M x M
968 *
969  IF ( lsvec ) THEN
970  n1 = n
971  IF ( lsame( jobu, 'F' ) ) n1 = m
972  END IF
973 *
974 * Set numerical parameters
975 *
976 *! NOTE: Make sure SLAMCH() does not fail on the target architecture.
977 *
978  epsln = slamch('Epsilon')
979  sfmin = slamch('SafeMinimum')
980  small = sfmin / epsln
981  big = slamch('O')
982 * BIG = ONE / SFMIN
983 *
984 * Initialize SVA(1:N) = diag( ||A e_i||_2 )_1^N
985 *
986 *(!) If necessary, scale SVA() to protect the largest norm from
987 * overflow. It is possible that this scaling pushes the smallest
988 * column norm left from the underflow threshold (extreme case).
989 *
990  scalem = one / sqrt(REAL(m)*REAL(n))
991  noscal = .true.
992  goscal = .true.
993  DO 1874 p = 1, n
994  aapp = zero
995  aaqq = one
996  CALL classq( m, a(1,p), 1, aapp, aaqq )
997  IF ( aapp .GT. big ) THEN
998  info = - 9
999  CALL xerbla( 'CGEJSV', -info )
1000  RETURN
1001  END IF
1002  aaqq = sqrt(aaqq)
1003  IF ( ( aapp .LT. (big / aaqq) ) .AND. noscal ) THEN
1004  sva(p) = aapp * aaqq
1005  ELSE
1006  noscal = .false.
1007  sva(p) = aapp * ( aaqq * scalem )
1008  IF ( goscal ) THEN
1009  goscal = .false.
1010  CALL sscal( p-1, scalem, sva, 1 )
1011  END IF
1012  END IF
1013  1874 CONTINUE
1014 *
1015  IF ( noscal ) scalem = one
1016 *
1017  aapp = zero
1018  aaqq = big
1019  DO 4781 p = 1, n
1020  aapp = max( aapp, sva(p) )
1021  IF ( sva(p) .NE. zero ) aaqq = min( aaqq, sva(p) )
1022  4781 CONTINUE
1023 *
1024 * Quick return for zero M x N matrix
1025 * #:)
1026  IF ( aapp .EQ. zero ) THEN
1027  IF ( lsvec ) CALL claset( 'G', m, n1, czero, cone, u, ldu )
1028  IF ( rsvec ) CALL claset( 'G', n, n, czero, cone, v, ldv )
1029  rwork(1) = one
1030  rwork(2) = one
1031  IF ( errest ) rwork(3) = one
1032  IF ( lsvec .AND. rsvec ) THEN
1033  rwork(4) = one
1034  rwork(5) = one
1035  END IF
1036  IF ( l2tran ) THEN
1037  rwork(6) = zero
1038  rwork(7) = zero
1039  END IF
1040  iwork(1) = 0
1041  iwork(2) = 0
1042  iwork(3) = 0
1043  iwork(4) = -1
1044  RETURN
1045  END IF
1046 *
1047 * Issue warning if denormalized column norms detected. Override the
1048 * high relative accuracy request. Issue licence to kill nonzero columns
1049 * (set them to zero) whose norm is less than sigma_max / BIG (roughly).
1050 * #:(
1051  warning = 0
1052  IF ( aaqq .LE. sfmin ) THEN
1053  l2rank = .true.
1054  l2kill = .true.
1055  warning = 1
1056  END IF
1057 *
1058 * Quick return for one-column matrix
1059 * #:)
1060  IF ( n .EQ. 1 ) THEN
1061 *
1062  IF ( lsvec ) THEN
1063  CALL clascl( 'G',0,0,sva(1),scalem, m,1,a(1,1),lda,ierr )
1064  CALL clacpy( 'A', m, 1, a, lda, u, ldu )
1065 * computing all M left singular vectors of the M x 1 matrix
1066  IF ( n1 .NE. n ) THEN
1067  CALL cgeqrf( m, n, u,ldu, cwork, cwork(n+1),lwork-n,ierr )
1068  CALL cungqr( m,n1,1, u,ldu,cwork,cwork(n+1),lwork-n,ierr )
1069  CALL ccopy( m, a(1,1), 1, u(1,1), 1 )
1070  END IF
1071  END IF
1072  IF ( rsvec ) THEN
1073  v(1,1) = cone
1074  END IF
1075  IF ( sva(1) .LT. (big*scalem) ) THEN
1076  sva(1) = sva(1) / scalem
1077  scalem = one
1078  END IF
1079  rwork(1) = one / scalem
1080  rwork(2) = one
1081  IF ( sva(1) .NE. zero ) THEN
1082  iwork(1) = 1
1083  IF ( ( sva(1) / scalem) .GE. sfmin ) THEN
1084  iwork(2) = 1
1085  ELSE
1086  iwork(2) = 0
1087  END IF
1088  ELSE
1089  iwork(1) = 0
1090  iwork(2) = 0
1091  END IF
1092  iwork(3) = 0
1093  iwork(4) = -1
1094  IF ( errest ) rwork(3) = one
1095  IF ( lsvec .AND. rsvec ) THEN
1096  rwork(4) = one
1097  rwork(5) = one
1098  END IF
1099  IF ( l2tran ) THEN
1100  rwork(6) = zero
1101  rwork(7) = zero
1102  END IF
1103  RETURN
1104 *
1105  END IF
1106 *
1107  transp = .false.
1108 *
1109  aatmax = -one
1110  aatmin = big
1111  IF ( rowpiv .OR. l2tran ) THEN
1112 *
1113 * Compute the row norms, needed to determine row pivoting sequence
1114 * (in the case of heavily row weighted A, row pivoting is strongly
1115 * advised) and to collect information needed to compare the
1116 * structures of A * A^* and A^* * A (in the case L2TRAN.EQ..TRUE.).
1117 *
1118  IF ( l2tran ) THEN
1119  DO 1950 p = 1, m
1120  xsc = zero
1121  temp1 = one
1122  CALL classq( n, a(p,1), lda, xsc, temp1 )
1123 * CLASSQ gets both the ell_2 and the ell_infinity norm
1124 * in one pass through the vector
1125  rwork(m+p) = xsc * scalem
1126  rwork(p) = xsc * (scalem*sqrt(temp1))
1127  aatmax = max( aatmax, rwork(p) )
1128  IF (rwork(p) .NE. zero)
1129  $ aatmin = min(aatmin,rwork(p))
1130  1950 CONTINUE
1131  ELSE
1132  DO 1904 p = 1, m
1133  rwork(m+p) = scalem*abs( a(p,icamax(n,a(p,1),lda)) )
1134  aatmax = max( aatmax, rwork(m+p) )
1135  aatmin = min( aatmin, rwork(m+p) )
1136  1904 CONTINUE
1137  END IF
1138 *
1139  END IF
1140 *
1141 * For square matrix A try to determine whether A^* would be better
1142 * input for the preconditioned Jacobi SVD, with faster convergence.
1143 * The decision is based on an O(N) function of the vector of column
1144 * and row norms of A, based on the Shannon entropy. This should give
1145 * the right choice in most cases when the difference actually matters.
1146 * It may fail and pick the slower converging side.
1147 *
1148  entra = zero
1149  entrat = zero
1150  IF ( l2tran ) THEN
1151 *
1152  xsc = zero
1153  temp1 = one
1154  CALL slassq( n, sva, 1, xsc, temp1 )
1155  temp1 = one / temp1
1156 *
1157  entra = zero
1158  DO 1113 p = 1, n
1159  big1 = ( ( sva(p) / xsc )**2 ) * temp1
1160  IF ( big1 .NE. zero ) entra = entra + big1 * alog(big1)
1161  1113 CONTINUE
1162  entra = - entra / alog(REAL(n))
1163 *
1164 * Now, SVA().^2/Trace(A^* * A) is a point in the probability simplex.
1165 * It is derived from the diagonal of A^* * A. Do the same with the
1166 * diagonal of A * A^*, compute the entropy of the corresponding
1167 * probability distribution. Note that A * A^* and A^* * A have the
1168 * same trace.
1169 *
1170  entrat = zero
1171  DO 1114 p = 1, m
1172  big1 = ( ( rwork(p) / xsc )**2 ) * temp1
1173  IF ( big1 .NE. zero ) entrat = entrat + big1 * alog(big1)
1174  1114 CONTINUE
1175  entrat = - entrat / alog(REAL(m))
1176 *
1177 * Analyze the entropies and decide A or A^*. Smaller entropy
1178 * usually means better input for the algorithm.
1179 *
1180  transp = ( entrat .LT. entra )
1181 *
1182 * If A^* is better than A, take the adjoint of A. This is allowed
1183 * only for square matrices, M=N.
1184  IF ( transp ) THEN
1185 * In an optimal implementation, this trivial transpose
1186 * should be replaced with faster transpose.
1187  DO 1115 p = 1, n - 1
1188  a(p,p) = conjg(a(p,p))
1189  DO 1116 q = p + 1, n
1190  ctemp = conjg(a(q,p))
1191  a(q,p) = conjg(a(p,q))
1192  a(p,q) = ctemp
1193  1116 CONTINUE
1194  1115 CONTINUE
1195  a(n,n) = conjg(a(n,n))
1196  DO 1117 p = 1, n
1197  rwork(m+p) = sva(p)
1198  sva(p) = rwork(p)
1199 * previously computed row 2-norms are now column 2-norms
1200 * of the transposed matrix
1201  1117 CONTINUE
1202  temp1 = aapp
1203  aapp = aatmax
1204  aatmax = temp1
1205  temp1 = aaqq
1206  aaqq = aatmin
1207  aatmin = temp1
1208  kill = lsvec
1209  lsvec = rsvec
1210  rsvec = kill
1211  IF ( lsvec ) n1 = n
1212 *
1213  rowpiv = .true.
1214  END IF
1215 *
1216  END IF
1217 * END IF L2TRAN
1218 *
1219 * Scale the matrix so that its maximal singular value remains less
1220 * than SQRT(BIG) -- the matrix is scaled so that its maximal column
1221 * has Euclidean norm equal to SQRT(BIG/N). The only reason to keep
1222 * SQRT(BIG) instead of BIG is the fact that CGEJSV uses LAPACK and
1223 * BLAS routines that, in some implementations, are not capable of
1224 * working in the full interval [SFMIN,BIG] and that they may provoke
1225 * overflows in the intermediate results. If the singular values spread
1226 * from SFMIN to BIG, then CGESVJ will compute them. So, in that case,
1227 * one should use CGESVJ instead of CGEJSV.
1228  big1 = sqrt( big )
1229  temp1 = sqrt( big / REAL(N) )
1230 * >> for future updates: allow bigger range, i.e. the largest column
1231 * will be allowed up to BIG/N and CGESVJ will do the rest. However, for
1232 * this all other (LAPACK) components must allow such a range.
1233 * TEMP1 = BIG/REAL(N)
1234 * TEMP1 = BIG * EPSLN this should 'almost' work with current LAPACK components
1235  CALL slascl( 'G', 0, 0, aapp, temp1, n, 1, sva, n, ierr )
1236  IF ( aaqq .GT. (aapp * sfmin) ) THEN
1237  aaqq = ( aaqq / aapp ) * temp1
1238  ELSE
1239  aaqq = ( aaqq * temp1 ) / aapp
1240  END IF
1241  temp1 = temp1 * scalem
1242  CALL clascl( 'G', 0, 0, aapp, temp1, m, n, a, lda, ierr )
1243 *
1244 * To undo scaling at the end of this procedure, multiply the
1245 * computed singular values with USCAL2 / USCAL1.
1246 *
1247  uscal1 = temp1
1248  uscal2 = aapp
1249 *
1250  IF ( l2kill ) THEN
1251 * L2KILL enforces computation of nonzero singular values in
1252 * the restricted range of condition number of the initial A,
1253 * sigma_max(A) / sigma_min(A) approx. SQRT(BIG)/SQRT(SFMIN).
1254  xsc = sqrt( sfmin )
1255  ELSE
1256  xsc = small
1257 *
1258 * Now, if the condition number of A is too big,
1259 * sigma_max(A) / sigma_min(A) .GT. SQRT(BIG/N) * EPSLN / SFMIN,
1260 * as a precautionary measure, the full SVD is computed using CGESVJ
1261 * with accumulated Jacobi rotations. This provides numerically
1262 * more robust computation, at the cost of slightly increased run
1263 * time. Depending on the concrete implementation of BLAS and LAPACK
1264 * (i.e. how they behave in presence of extreme ill-conditioning) the
1265 * implementer may decide to remove this switch.
1266  IF ( ( aaqq.LT.sqrt(sfmin) ) .AND. lsvec .AND. rsvec ) THEN
1267  jracc = .true.
1268  END IF
1269 *
1270  END IF
1271  IF ( aaqq .LT. xsc ) THEN
1272  DO 700 p = 1, n
1273  IF ( sva(p) .LT. xsc ) THEN
1274  CALL claset( 'A', m, 1, czero, czero, a(1,p), lda )
1275  sva(p) = zero
1276  END IF
1277  700 CONTINUE
1278  END IF
1279 *
1280 * Preconditioning using QR factorization with pivoting
1281 *
1282  IF ( rowpiv ) THEN
1283 * Optional row permutation (Bjoerck row pivoting):
1284 * A result by Cox and Higham shows that the Bjoerck's
1285 * row pivoting combined with standard column pivoting
1286 * has similar effect as Powell-Reid complete pivoting.
1287 * The ell-infinity norms of A are made nonincreasing.
1288  IF ( ( lsvec .AND. rsvec ) .AND. .NOT.( jracc ) ) THEN
1289  iwoff = 2*n
1290  ELSE
1291  iwoff = n
1292  END IF
1293  DO 1952 p = 1, m - 1
1294  q = isamax( m-p+1, rwork(m+p), 1 ) + p - 1
1295  iwork(iwoff+p) = q
1296  IF ( p .NE. q ) THEN
1297  temp1 = rwork(m+p)
1298  rwork(m+p) = rwork(m+q)
1299  rwork(m+q) = temp1
1300  END IF
1301  1952 CONTINUE
1302  CALL claswp( n, a, lda, 1, m-1, iwork(iwoff+1), 1 )
1303  END IF
1304 *
1305 * End of the preparation phase (scaling, optional sorting and
1306 * transposing, optional flushing of small columns).
1307 *
1308 * Preconditioning
1309 *
1310 * If the full SVD is needed, the right singular vectors are computed
1311 * from a matrix equation, and for that we need theoretical analysis
1312 * of the Businger-Golub pivoting. So we use CGEQP3 as the first RR QRF.
1313 * In all other cases the first RR QRF can be chosen by other criteria
1314 * (e.g. speed by replacing global with restricted window pivoting, such
1315 * as in xGEQPX from TOMS # 782). Good results will be obtained using
1316 * xGEQPX with properly (!) chosen numerical parameters.
1317 * Any improvement of CGEQP3 improves overall performance of CGEJSV.
1318 *
1319 * A * P1 = Q1 * [ R1^* 0]^*:
1320  DO 1963 p = 1, n
1321 * .. all columns are free columns
1322  iwork(p) = 0
1323  1963 CONTINUE
1324  CALL cgeqp3( m, n, a, lda, iwork, cwork, cwork(n+1), lwork-n,
1325  $ rwork, ierr )
1326 *
1327 * The upper triangular matrix R1 from the first QRF is inspected for
1328 * rank deficiency and possibilities for deflation, or possible
1329 * ill-conditioning. Depending on the user specified flag L2RANK,
1330 * the procedure explores possibilities to reduce the numerical
1331 * rank by inspecting the computed upper triangular factor. If
1332 * L2RANK or L2ABER are up, then CGEJSV will compute the SVD of
1333 * A + dA, where ||dA|| <= f(M,N)*EPSLN.
1334 *
1335  nr = 1
1336  IF ( l2aber ) THEN
1337 * Standard absolute error bound suffices. All sigma_i with
1338 * sigma_i < N*EPSLN*||A|| are flushed to zero. This is an
1339 * aggressive enforcement of lower numerical rank by introducing a
1340 * backward error of the order of N*EPSLN*||A||.
1341  temp1 = sqrt(REAL(n))*epsln
1342  DO 3001 p = 2, n
1343  IF ( abs(a(p,p)) .GE. (temp1*abs(a(1,1))) ) THEN
1344  nr = nr + 1
1345  ELSE
1346  GO TO 3002
1347  END IF
1348  3001 CONTINUE
1349  3002 CONTINUE
1350  ELSE IF ( l2rank ) THEN
1351 * .. similar to the above, only slightly more gentle (less aggressive).
1352 * A sudden drop on the diagonal of R1 is used as the criterion for
1353 * close-to-rank-deficient.
1354  temp1 = sqrt(sfmin)
1355  DO 3401 p = 2, n
1356  IF ( ( abs(a(p,p)) .LT. (epsln*abs(a(p-1,p-1))) ) .OR.
1357  $ ( abs(a(p,p)) .LT. small ) .OR.
1358  $ ( l2kill .AND. (abs(a(p,p)) .LT. temp1) ) ) GO TO 3402
1359  nr = nr + 1
1360  3401 CONTINUE
1361  3402 CONTINUE
1362 *
1363  ELSE
1364 * The goal is high relative accuracy. However, if the matrix
1365 * has high scaled condition number the relative accuracy is in
1366 * general not feasible. Later on, a condition number estimator
1367 * will be deployed to estimate the scaled condition number.
1368 * Here we just remove the underflowed part of the triangular
1369 * factor. This prevents the situation in which the code is
1370 * working hard to get the accuracy not warranted by the data.
1371  temp1 = sqrt(sfmin)
1372  DO 3301 p = 2, n
1373  IF ( ( abs(a(p,p)) .LT. small ) .OR.
1374  $ ( l2kill .AND. (abs(a(p,p)) .LT. temp1) ) ) GO TO 3302
1375  nr = nr + 1
1376  3301 CONTINUE
1377  3302 CONTINUE
1378 *
1379  END IF
1380 *
1381  almort = .false.
1382  IF ( nr .EQ. n ) THEN
1383  maxprj = one
1384  DO 3051 p = 2, n
1385  temp1 = abs(a(p,p)) / sva(iwork(p))
1386  maxprj = min( maxprj, temp1 )
1387  3051 CONTINUE
1388  IF ( maxprj**2 .GE. one - REAL(n)*EPSLN ) almort = .true.
1389  END IF
1390 *
1391 *
1392  sconda = - one
1393  condr1 = - one
1394  condr2 = - one
1395 *
1396  IF ( errest ) THEN
1397  IF ( n .EQ. nr ) THEN
1398  IF ( rsvec ) THEN
1399 * .. V is available as workspace
1400  CALL clacpy( 'U', n, n, a, lda, v, ldv )
1401  DO 3053 p = 1, n
1402  temp1 = sva(iwork(p))
1403  CALL csscal( p, one/temp1, v(1,p), 1 )
1404  3053 CONTINUE
1405  IF ( lsvec )THEN
1406  CALL cpocon( 'U', n, v, ldv, one, temp1,
1407  $ cwork(n+1), rwork, ierr )
1408  ELSE
1409  CALL cpocon( 'U', n, v, ldv, one, temp1,
1410  $ cwork, rwork, ierr )
1411  END IF
1412 *
1413  ELSE IF ( lsvec ) THEN
1414 * .. U is available as workspace
1415  CALL clacpy( 'U', n, n, a, lda, u, ldu )
1416  DO 3054 p = 1, n
1417  temp1 = sva(iwork(p))
1418  CALL csscal( p, one/temp1, u(1,p), 1 )
1419  3054 CONTINUE
1420  CALL cpocon( 'U', n, u, ldu, one, temp1,
1421  $ cwork(n+1), rwork, ierr )
1422  ELSE
1423  CALL clacpy( 'U', n, n, a, lda, cwork, n )
1424 *[] CALL CLACPY( 'U', N, N, A, LDA, CWORK(N+1), N )
1425 * Change: here index shifted by N to the left, CWORK(1:N)
1426 * not needed for SIGMA only computation
1427  DO 3052 p = 1, n
1428  temp1 = sva(iwork(p))
1429 *[] CALL CSSCAL( p, ONE/TEMP1, CWORK(N+(p-1)*N+1), 1 )
1430  CALL csscal( p, one/temp1, cwork((p-1)*n+1), 1 )
1431  3052 CONTINUE
1432 * .. the columns of R are scaled to have unit Euclidean lengths.
1433 *[] CALL CPOCON( 'U', N, CWORK(N+1), N, ONE, TEMP1,
1434 *[] $ CWORK(N+N*N+1), RWORK, IERR )
1435  CALL cpocon( 'U', n, cwork, n, one, temp1,
1436  $ cwork(n*n+1), rwork, ierr )
1437 *
1438  END IF
1439  IF ( temp1 .NE. zero ) THEN
1440  sconda = one / sqrt(temp1)
1441  ELSE
1442  sconda = - one
1443  END IF
1444 * SCONDA is an estimate of SQRT(||(R^* * R)^(-1)||_1).
1445 * N^(-1/4) * SCONDA <= ||R^(-1)||_2 <= N^(1/4) * SCONDA
1446  ELSE
1447  sconda = - one
1448  END IF
1449  END IF
1450 *
1451  l2pert = l2pert .AND. ( abs( a(1,1)/a(nr,nr) ) .GT. sqrt(big1) )
1452 * If there is no violent scaling, artificial perturbation is not needed.
1453 *
1454 * Phase 3:
1455 *
1456  IF ( .NOT. ( rsvec .OR. lsvec ) ) THEN
1457 *
1458 * Singular Values only
1459 *
1460 * .. transpose A(1:NR,1:N)
1461  DO 1946 p = 1, min( n-1, nr )
1462  CALL ccopy( n-p, a(p,p+1), lda, a(p+1,p), 1 )
1463  CALL clacgv( n-p+1, a(p,p), 1 )
1464  1946 CONTINUE
1465  IF ( nr .EQ. n ) a(n,n) = conjg(a(n,n))
1466 *
1467 * The following two DO-loops introduce a small relative perturbation
1468 * into the strict upper triangle of the lower triangular matrix.
1469 * Small entries below the main diagonal are also changed.
1470 * This modification is useful if the computing environment does not
1471 * provide/allow FLUSH TO ZERO underflow, for it prevents many
1472 * annoying denormalized numbers in the case of strongly scaled matrices.
1473 * The perturbation is structured so that it does not introduce any
1474 * new perturbation of the singular values, and it does not destroy
1475 * the job done by the preconditioner.
1476 * The licence for this perturbation is in the variable L2PERT, which
1477 * should be .FALSE. if FLUSH TO ZERO underflow is active.
1478 *
1479  IF ( .NOT. almort ) THEN
1480 *
1481  IF ( l2pert ) THEN
1482 * XSC = SQRT(SMALL)
1483  xsc = epsln / REAL(N)
1484  DO 4947 q = 1, nr
1485  ctemp = cmplx(xsc*abs(a(q,q)),zero)
1486  DO 4949 p = 1, n
1487  IF ( ( (p.GT.q) .AND. (abs(a(p,q)).LE.temp1) )
1488  $ .OR. ( p .LT. q ) )
1489 * $ A(p,q) = TEMP1 * ( A(p,q) / ABS(A(p,q)) )
1490  $ a(p,q) = ctemp
1491  4949 CONTINUE
1492  4947 CONTINUE
1493  ELSE
1494  CALL claset( 'U', nr-1,nr-1, czero,czero, a(1,2),lda )
1495  END IF
1496 *
1497 * .. second preconditioning using the QR factorization
1498 *
1499  CALL cgeqrf( n,nr, a,lda, cwork, cwork(n+1),lwork-n, ierr )
1500 *
1501 * .. and transpose upper to lower triangular
1502  DO 1948 p = 1, nr - 1
1503  CALL ccopy( nr-p, a(p,p+1), lda, a(p+1,p), 1 )
1504  CALL clacgv( nr-p+1, a(p,p), 1 )
1505  1948 CONTINUE
1506 *
1507  END IF
1508 *
1509 * Row-cyclic Jacobi SVD algorithm with column pivoting
1510 *
1511 * .. again some perturbation (a "background noise") is added
1512 * to drown denormals
1513  IF ( l2pert ) THEN
1514 * XSC = SQRT(SMALL)
1515  xsc = epsln / REAL(n)
1516  DO 1947 q = 1, nr
1517  ctemp = cmplx(xsc*abs(a(q,q)),zero)
1518  DO 1949 p = 1, nr
1519  IF ( ( (p.GT.q) .AND. (abs(a(p,q)).LE.temp1) )
1520  $ .OR. ( p .LT. q ) )
1521 * $ A(p,q) = TEMP1 * ( A(p,q) / ABS(A(p,q)) )
1522  $ a(p,q) = ctemp
1523  1949 CONTINUE
1524  1947 CONTINUE
1525  ELSE
1526  CALL claset( 'U', nr-1, nr-1, czero, czero, a(1,2), lda )
1527  END IF
1528 *
1529 * .. and one-sided Jacobi rotations are started on a lower
1530 * triangular matrix (plus a perturbation, which is ignored in
1531 * the part that destroys the triangular form)
1532 *
1533  CALL cgesvj( 'L', 'N', 'N', nr, nr, a, lda, sva,
1534  $ n, v, ldv, cwork, lwork, rwork, lrwork, info )
1535 *
1536  scalem = rwork(1)
1537  numrank = nint(rwork(2))
1538 *
1539 *
1540  ELSE IF ( ( rsvec .AND. ( .NOT. lsvec ) .AND. ( .NOT. jracc ) )
1541  $ .OR.
1542  $ ( jracc .AND. ( .NOT. lsvec ) .AND. ( nr .NE. n ) ) ) THEN
1543 *
1544 * -> Singular Values and Right Singular Vectors <-
1545 *
1546  IF ( almort ) THEN
1547 *
1548 * .. in this case NR equals N
1549  DO 1998 p = 1, nr
1550  CALL ccopy( n-p+1, a(p,p), lda, v(p,p), 1 )
1551  CALL clacgv( n-p+1, v(p,p), 1 )
1552  1998 CONTINUE
1553  CALL claset( 'U', nr-1,nr-1, czero, czero, v(1,2), ldv )
1554 *
1555  CALL cgesvj( 'L','U','N', n, nr, v, ldv, sva, nr, a, lda,
1556  $ cwork, lwork, rwork, lrwork, info )
1557  scalem = rwork(1)
1558  numrank = nint(rwork(2))
1559 
1560  ELSE
1561 *
1562 * .. two more QR factorizations ( one QRF is not enough, two require an
1563 * accumulated product of Jacobi rotations, three are perfect )
1564 *
1565  CALL claset( 'L', nr-1,nr-1, czero, czero, a(2,1), lda )
1566  CALL cgelqf( nr,n, a, lda, cwork, cwork(n+1), lwork-n, ierr)
1567  CALL clacpy( 'L', nr, nr, a, lda, v, ldv )
1568  CALL claset( 'U', nr-1,nr-1, czero, czero, v(1,2), ldv )
1569  CALL cgeqrf( nr, nr, v, ldv, cwork(n+1), cwork(2*n+1),
1570  $ lwork-2*n, ierr )
1571  DO 8998 p = 1, nr
1572  CALL ccopy( nr-p+1, v(p,p), ldv, v(p,p), 1 )
1573  CALL clacgv( nr-p+1, v(p,p), 1 )
1574  8998 CONTINUE
1575  CALL claset('U', nr-1, nr-1, czero, czero, v(1,2), ldv)
1576 *
1577  CALL cgesvj( 'L', 'U','N', nr, nr, v,ldv, sva, nr, u,
1578  $ ldu, cwork(n+1), lwork-n, rwork, lrwork, info )
1579  scalem = rwork(1)
1580  numrank = nint(rwork(2))
1581  IF ( nr .LT. n ) THEN
1582  CALL claset( 'A',n-nr, nr, czero,czero, v(nr+1,1), ldv )
1583  CALL claset( 'A',nr, n-nr, czero,czero, v(1,nr+1), ldv )
1584  CALL claset( 'A',n-nr,n-nr,czero,cone, v(nr+1,nr+1),ldv )
1585  END IF
1586 *
1587  CALL cunmlq( 'L', 'C', n, n, nr, a, lda, cwork,
1588  $ v, ldv, cwork(n+1), lwork-n, ierr )
1589 *
1590  END IF
1591 * .. permute the rows of V
1592 * DO 8991 p = 1, N
1593 * CALL CCOPY( N, V(p,1), LDV, A(IWORK(p),1), LDA )
1594 * 8991 CONTINUE
1595 * CALL CLACPY( 'All', N, N, A, LDA, V, LDV )
1596  CALL clapmr( .false., n, n, v, ldv, iwork )
1597 *
1598  IF ( transp ) THEN
1599  CALL clacpy( 'A', n, n, v, ldv, u, ldu )
1600  END IF
1601 *
1602  ELSE IF ( jracc .AND. (.NOT. lsvec) .AND. ( nr.EQ. n ) ) THEN
1603 *
1604  CALL claset( 'L', n-1,n-1, czero, czero, a(2,1), lda )
1605 *
1606  CALL cgesvj( 'U','N','V', n, n, a, lda, sva, n, v, ldv,
1607  $ cwork, lwork, rwork, lrwork, info )
1608  scalem = rwork(1)
1609  numrank = nint(rwork(2))
1610  CALL clapmr( .false., n, n, v, ldv, iwork )
1611 *
1612  ELSE IF ( lsvec .AND. ( .NOT. rsvec ) ) THEN
1613 *
1614 * .. Singular Values and Left Singular Vectors ..
1615 *
1616 * .. second preconditioning step to avoid need to accumulate
1617 * Jacobi rotations in the Jacobi iterations.
1618  DO 1965 p = 1, nr
1619  CALL ccopy( n-p+1, a(p,p), lda, u(p,p), 1 )
1620  CALL clacgv( n-p+1, u(p,p), 1 )
1621  1965 CONTINUE
1622  CALL claset( 'U', nr-1, nr-1, czero, czero, u(1,2), ldu )
1623 *
1624  CALL cgeqrf( n, nr, u, ldu, cwork(n+1), cwork(2*n+1),
1625  $ lwork-2*n, ierr )
1626 *
1627  DO 1967 p = 1, nr - 1
1628  CALL ccopy( nr-p, u(p,p+1), ldu, u(p+1,p), 1 )
1629  CALL clacgv( n-p+1, u(p,p), 1 )
1630  1967 CONTINUE
1631  CALL claset( 'U', nr-1, nr-1, czero, czero, u(1,2), ldu )
1632 *
1633  CALL cgesvj( 'L', 'U', 'N', nr,nr, u, ldu, sva, nr, a,
1634  $ lda, cwork(n+1), lwork-n, rwork, lrwork, info )
1635  scalem = rwork(1)
1636  numrank = nint(rwork(2))
1637 *
1638  IF ( nr .LT. m ) THEN
1639  CALL claset( 'A', m-nr, nr,czero, czero, u(nr+1,1), ldu )
1640  IF ( nr .LT. n1 ) THEN
1641  CALL claset( 'A',nr, n1-nr, czero, czero, u(1,nr+1),ldu )
1642  CALL claset( 'A',m-nr,n1-nr,czero,cone,u(nr+1,nr+1),ldu )
1643  END IF
1644  END IF
1645 *
1646  CALL cunmqr( 'L', 'N', m, n1, n, a, lda, cwork, u,
1647  $ ldu, cwork(n+1), lwork-n, ierr )
1648 *
1649  IF ( rowpiv )
1650  $ CALL claswp( n1, u, ldu, 1, m-1, iwork(iwoff+1), -1 )
1651 *
1652  DO 1974 p = 1, n1
1653  xsc = one / scnrm2( m, u(1,p), 1 )
1654  CALL csscal( m, xsc, u(1,p), 1 )
1655  1974 CONTINUE
1656 *
1657  IF ( transp ) THEN
1658  CALL clacpy( 'A', n, n, u, ldu, v, ldv )
1659  END IF
1660 *
1661  ELSE
1662 *
1663 * .. Full SVD ..
1664 *
1665  IF ( .NOT. jracc ) THEN
1666 *
1667  IF ( .NOT. almort ) THEN
1668 *
1669 * Second Preconditioning Step (QRF [with pivoting])
1670 * Note that the composition of TRANSPOSE, QRF and TRANSPOSE is
1671 * equivalent to an LQF CALL. Since in many libraries the QRF
1672 * seems to be better optimized than the LQF, we do an explicit
1673 * transpose and use the QRF. This is subject to change in an
1674 * optimized implementation of CGEJSV.
1675 *
1676  DO 1968 p = 1, nr
1677  CALL ccopy( n-p+1, a(p,p), lda, v(p,p), 1 )
1678  CALL clacgv( n-p+1, v(p,p), 1 )
1679  1968 CONTINUE
1680 *
1681 * .. the following two loops perturb small entries to avoid
1682 * denormals in the second QR factorization, where they are
1683 * as good as zeros. This is done to avoid painfully slow
1684 * computation with denormals. The relative size of the perturbation
1685 * is a parameter that can be changed by the implementer.
1686 * This perturbation device will be obsolete on machines with
1687 * properly implemented arithmetic.
1688 * To switch it off, set L2PERT=.FALSE. To remove it from the
1689 * code, remove the action under L2PERT=.TRUE. and keep the ELSE part.
1690 * The following two loops should be blocked and fused with the
1691 * transposed copy above.
1692 *
1693  IF ( l2pert ) THEN
1694  xsc = sqrt(small)
1695  DO 2969 q = 1, nr
1696  ctemp = cmplx(xsc*abs( v(q,q) ),zero)
1697  DO 2968 p = 1, n
1698  IF ( ( p .GT. q ) .AND. ( abs(v(p,q)) .LE. temp1 )
1699  $ .OR. ( p .LT. q ) )
1700 * $ V(p,q) = TEMP1 * ( V(p,q) / ABS(V(p,q)) )
1701  $ v(p,q) = ctemp
1702  IF ( p .LT. q ) v(p,q) = - v(p,q)
1703  2968 CONTINUE
1704  2969 CONTINUE
1705  ELSE
1706  CALL claset( 'U', nr-1, nr-1, czero, czero, v(1,2), ldv )
1707  END IF
1708 *
1709 * Estimate the row scaled condition number of R1
1710 * (If R1 is rectangular, N > NR, then the condition number
1711 * of the leading NR x NR submatrix is estimated.)
1712 *
1713  CALL clacpy( 'L', nr, nr, v, ldv, cwork(2*n+1), nr )
1714  DO 3950 p = 1, nr
1715  temp1 = scnrm2(nr-p+1,cwork(2*n+(p-1)*nr+p),1)
1716  CALL csscal(nr-p+1,one/temp1,cwork(2*n+(p-1)*nr+p),1)
1717  3950 CONTINUE
1718  CALL cpocon('L',nr,cwork(2*n+1),nr,one,temp1,
1719  $ cwork(2*n+nr*nr+1),rwork,ierr)
1720  condr1 = one / sqrt(temp1)
1721 * .. here we need a second opinion on the condition number
1722 * .. then assume the worst-case scenario
1723 * R1 is OK for inverse <=> CONDR1 .LT. REAL(N)
1724 * more conservative <=> CONDR1 .LT. SQRT(REAL(N))
1725 *
1726  cond_ok = sqrt(sqrt(REAL(nr)))
1727 *[TP] COND_OK is a tuning parameter.
1728 *
1729  IF ( condr1 .LT. cond_ok ) THEN
1730 * .. the second QRF without pivoting. Note: in an optimized
1731 * implementation, this QRF should be implemented as the QRF
1732 * of a lower triangular matrix.
1733 * R1^* = Q2 * R2
1734  CALL cgeqrf( n, nr, v, ldv, cwork(n+1), cwork(2*n+1),
1735  $ lwork-2*n, ierr )
1736 *
1737  IF ( l2pert ) THEN
1738  xsc = sqrt(small)/epsln
1739  DO 3959 p = 2, nr
1740  DO 3958 q = 1, p - 1
1741  ctemp=cmplx(xsc*min(abs(v(p,p)),abs(v(q,q))),
1742  $ zero)
1743  IF ( abs(v(q,p)) .LE. temp1 )
1744 * $ V(q,p) = TEMP1 * ( V(q,p) / ABS(V(q,p)) )
1745  $ v(q,p) = ctemp
1746  3958 CONTINUE
1747  3959 CONTINUE
1748  END IF
1749 *
1750  IF ( nr .NE. n )
1751  $ CALL clacpy( 'A', n, nr, v, ldv, cwork(2*n+1), n )
1752 * .. save ...
1753 *
1754 * .. this transposed copy should be better than a naive one
1755  DO 1969 p = 1, nr - 1
1756  CALL ccopy( nr-p, v(p,p+1), ldv, v(p+1,p), 1 )
1757  CALL clacgv(nr-p+1, v(p,p), 1 )
1758  1969 CONTINUE
1759  v(nr,nr)=conjg(v(nr,nr))
1760 *
1761  condr2 = condr1
1762 *
1763  ELSE
1764 *
1765 * .. ill-conditioned case: second QRF with pivoting
1766 * Note that windowed pivoting would be equally good
1767 * numerically, and more run-time efficient. So, in
1768 * an optimal implementation, the next call to CGEQP3
1769 * should be replaced with e.g. CALL CGEQPX (ACM TOMS #782)
1770 * with properly (carefully) chosen parameters.
1771 *
1772 * R1^* * P2 = Q2 * R2
1773  DO 3003 p = 1, nr
1774  iwork(n+p) = 0
1775  3003 CONTINUE
1776  CALL cgeqp3( n, nr, v, ldv, iwork(n+1), cwork(n+1),
1777  $ cwork(2*n+1), lwork-2*n, rwork, ierr )
1778 ** CALL CGEQRF( N, NR, V, LDV, CWORK(N+1), CWORK(2*N+1),
1779 ** $ LWORK-2*N, IERR )
1780  IF ( l2pert ) THEN
1781  xsc = sqrt(small)
1782  DO 3969 p = 2, nr
1783  DO 3968 q = 1, p - 1
1784  ctemp=cmplx(xsc*min(abs(v(p,p)),abs(v(q,q))),
1785  $ zero)
1786  IF ( abs(v(q,p)) .LE. temp1 )
1787 * $ V(q,p) = TEMP1 * ( V(q,p) / ABS(V(q,p)) )
1788  $ v(q,p) = ctemp
1789  3968 CONTINUE
1790  3969 CONTINUE
1791  END IF
1792 *
1793  CALL clacpy( 'A', n, nr, v, ldv, cwork(2*n+1), n )
1794 *
1795  IF ( l2pert ) THEN
1796  xsc = sqrt(small)
1797  DO 8970 p = 2, nr
1798  DO 8971 q = 1, p - 1
1799  ctemp=cmplx(xsc*min(abs(v(p,p)),abs(v(q,q))),
1800  $ zero)
1801 * V(p,q) = - TEMP1*( V(q,p) / ABS(V(q,p)) )
1802  v(p,q) = - ctemp
1803  8971 CONTINUE
1804  8970 CONTINUE
1805  ELSE
1806  CALL claset( 'L',nr-1,nr-1,czero,czero,v(2,1),ldv )
1807  END IF
1808 * Now, compute R2 = L3 * Q3, the LQ factorization.
1809  CALL cgelqf( nr, nr, v, ldv, cwork(2*n+n*nr+1),
1810  $ cwork(2*n+n*nr+nr+1), lwork-2*n-n*nr-nr, ierr )
1811 * .. and estimate the condition number
1812  CALL clacpy( 'L',nr,nr,v,ldv,cwork(2*n+n*nr+nr+1),nr )
1813  DO 4950 p = 1, nr
1814  temp1 = scnrm2( p, cwork(2*n+n*nr+nr+p), nr )
1815  CALL csscal( p, one/temp1, cwork(2*n+n*nr+nr+p), nr )
1816  4950 CONTINUE
1817  CALL cpocon( 'L',nr,cwork(2*n+n*nr+nr+1),nr,one,temp1,
1818  $ cwork(2*n+n*nr+nr+nr*nr+1),rwork,ierr )
1819  condr2 = one / sqrt(temp1)
1820 *
1821 *
1822  IF ( condr2 .GE. cond_ok ) THEN
1823 * .. save the Householder vectors used for Q3
1824 * (this overwrites the copy of R2, as it will not be
1825 * needed in this branch, but it does not overwrite the
1826 * Householder vectors of Q2).
1827  CALL clacpy( 'U', nr, nr, v, ldv, cwork(2*n+1), n )
1828 * .. and the rest of the information on Q3 is in
1829 * CWORK(2*N+N*NR+1:2*N+N*NR+N)
1830  END IF
1831 *
1832  END IF
1833 *
1834  IF ( l2pert ) THEN
1835  xsc = sqrt(small)
1836  DO 4968 q = 2, nr
1837  ctemp = xsc * v(q,q)
1838  DO 4969 p = 1, q - 1
1839 * V(p,q) = - TEMP1*( V(p,q) / ABS(V(p,q)) )
1840  v(p,q) = - ctemp
1841  4969 CONTINUE
1842  4968 CONTINUE
1843  ELSE
1844  CALL claset( 'U', nr-1,nr-1, czero,czero, v(1,2), ldv )
1845  END IF
1846 *
1847 * Second preconditioning finished; continue with Jacobi SVD
1848 * The input matrix is lower triangular.
1849 *
1850 * Recover the right singular vectors as solution of a well
1851 * conditioned triangular matrix equation.
1852 *
1853  IF ( condr1 .LT. cond_ok ) THEN
1854 *
1855  CALL cgesvj( 'L','U','N',nr,nr,v,ldv,sva,nr,u, ldu,
1856  $ cwork(2*n+n*nr+nr+1),lwork-2*n-n*nr-nr,rwork,
1857  $ lrwork, info )
1858  scalem = rwork(1)
1859  numrank = nint(rwork(2))
1860  DO 3970 p = 1, nr
1861  CALL ccopy( nr, v(1,p), 1, u(1,p), 1 )
1862  CALL csscal( nr, sva(p), v(1,p), 1 )
1863  3970 CONTINUE
1864 
1865 * .. pick the right matrix equation and solve it
1866 *
1867  IF ( nr .EQ. n ) THEN
1868 * :)) .. best case, R1 is inverted. The solution of this matrix
1869 * equation is Q2*V2 = the product of the Jacobi rotations
1870 * used in CGESVJ, premultiplied with the orthogonal matrix
1871 * from the second QR factorization.
1872  CALL ctrsm('L','U','N','N', nr,nr,cone, a,lda, v,ldv)
1873  ELSE
1874 * .. R1 is well conditioned, but non-square. The adjoint of R2
1875 * is inverted to get the product of the Jacobi rotations
1876 * used in CGESVJ. The Q-factor from the second QR
1877 * factorization is then built in explicitly.
1878  CALL ctrsm('L','U','C','N',nr,nr,cone,cwork(2*n+1),
1879  $ n,v,ldv)
1880  IF ( nr .LT. n ) THEN
1881  CALL claset('A',n-nr,nr,czero,czero,v(nr+1,1),ldv)
1882  CALL claset('A',nr,n-nr,czero,czero,v(1,nr+1),ldv)
1883  CALL claset('A',n-nr,n-nr,czero,cone,v(nr+1,nr+1),ldv)
1884  END IF
1885  CALL cunmqr('L','N',n,n,nr,cwork(2*n+1),n,cwork(n+1),
1886  $ v,ldv,cwork(2*n+n*nr+nr+1),lwork-2*n-n*nr-nr,ierr)
1887  END IF
1888 *
1889  ELSE IF ( condr2 .LT. cond_ok ) THEN
1890 *
1891 * The matrix R2 is inverted. The solution of the matrix equation
1892 * is Q3^* * V3 = the product of the Jacobi rotations (applied to
1893 * the lower triangular L3 from the LQ factorization of
1894 * R2=L3*Q3), pre-multiplied with the transposed Q3.
1895  CALL cgesvj( 'L', 'U', 'N', nr, nr, v, ldv, sva, nr, u,
1896  $ ldu, cwork(2*n+n*nr+nr+1), lwork-2*n-n*nr-nr,
1897  $ rwork, lrwork, info )
1898  scalem = rwork(1)
1899  numrank = nint(rwork(2))
1900  DO 3870 p = 1, nr
1901  CALL ccopy( nr, v(1,p), 1, u(1,p), 1 )
1902  CALL csscal( nr, sva(p), u(1,p), 1 )
1903  3870 CONTINUE
1904  CALL ctrsm('L','U','N','N',nr,nr,cone,cwork(2*n+1),n,
1905  $ u,ldu)
1906 * .. apply the permutation from the second QR factorization
1907  DO 873 q = 1, nr
1908  DO 872 p = 1, nr
1909  cwork(2*n+n*nr+nr+iwork(n+p)) = u(p,q)
1910  872 CONTINUE
1911  DO 874 p = 1, nr
1912  u(p,q) = cwork(2*n+n*nr+nr+p)
1913  874 CONTINUE
1914  873 CONTINUE
1915  IF ( nr .LT. n ) THEN
1916  CALL claset( 'A',n-nr,nr,czero,czero,v(nr+1,1),ldv )
1917  CALL claset( 'A',nr,n-nr,czero,czero,v(1,nr+1),ldv )
1918  CALL claset('A',n-nr,n-nr,czero,cone,v(nr+1,nr+1),ldv)
1919  END IF
1920  CALL cunmqr( 'L','N',n,n,nr,cwork(2*n+1),n,cwork(n+1),
1921  $ v,ldv,cwork(2*n+n*nr+nr+1),lwork-2*n-n*nr-nr,ierr )
1922  ELSE
1923 * Last line of defense.
1924 * #:( This is a rather pathological case: no scaled condition
1925 * improvement after two pivoted QR factorizations. Another
1926 * possibility is that the rank-revealing QR factorization
1927 * or the condition estimator has failed, or the COND_OK
1928 * is set very close to ONE (which is unnecessary). Normally,
1929 * this branch should never be executed, but in rare cases of
1930 * failure of the RRQR or condition estimator, the last line of
1931 * defense ensures that CGEJSV completes the task.
1932 * Compute the full SVD of L3 using CGESVJ with explicit
1933 * accumulation of Jacobi rotations.
1934  CALL cgesvj( 'L', 'U', 'V', nr, nr, v, ldv, sva, nr, u,
1935  $ ldu, cwork(2*n+n*nr+nr+1), lwork-2*n-n*nr-nr,
1936  $ rwork, lrwork, info )
1937  scalem = rwork(1)
1938  numrank = nint(rwork(2))
1939  IF ( nr .LT. n ) THEN
1940  CALL claset( 'A',n-nr,nr,czero,czero,v(nr+1,1),ldv )
1941  CALL claset( 'A',nr,n-nr,czero,czero,v(1,nr+1),ldv )
1942  CALL claset('A',n-nr,n-nr,czero,cone,v(nr+1,nr+1),ldv)
1943  END IF
1944  CALL cunmqr( 'L','N',n,n,nr,cwork(2*n+1),n,cwork(n+1),
1945  $ v,ldv,cwork(2*n+n*nr+nr+1),lwork-2*n-n*nr-nr,ierr )
1946 *
1947  CALL cunmlq( 'L', 'C', nr, nr, nr, cwork(2*n+1), n,
1948  $ cwork(2*n+n*nr+1), u, ldu, cwork(2*n+n*nr+nr+1),
1949  $ lwork-2*n-n*nr-nr, ierr )
1950  DO 773 q = 1, nr
1951  DO 772 p = 1, nr
1952  cwork(2*n+n*nr+nr+iwork(n+p)) = u(p,q)
1953  772 CONTINUE
1954  DO 774 p = 1, nr
1955  u(p,q) = cwork(2*n+n*nr+nr+p)
1956  774 CONTINUE
1957  773 CONTINUE
1958 *
1959  END IF
1960 *
1961 * Permute the rows of V using the (column) permutation from the
1962 * first QRF. Also, scale the columns to make them unit in
1963 * Euclidean norm. This applies to all cases.
1964 *
1965  temp1 = sqrt(REAL(n)) * epsln
1966  DO 1972 q = 1, n
1967  DO 972 p = 1, n
1968  cwork(2*n+n*nr+nr+iwork(p)) = v(p,q)
1969  972 CONTINUE
1970  DO 973 p = 1, n
1971  v(p,q) = cwork(2*n+n*nr+nr+p)
1972  973 CONTINUE
1973  xsc = one / scnrm2( n, v(1,q), 1 )
1974  IF ( (xsc .LT. (one-temp1)) .OR. (xsc .GT. (one+temp1)) )
1975  $ CALL csscal( n, xsc, v(1,q), 1 )
1976  1972 CONTINUE
1977 * At this moment, V contains the right singular vectors of A.
1978 * Next, assemble the left singular vector matrix U (M x N).
1979  IF ( nr .LT. m ) THEN
1980  CALL claset('A', m-nr, nr, czero, czero, u(nr+1,1), ldu)
1981  IF ( nr .LT. n1 ) THEN
1982  CALL claset('A',nr,n1-nr,czero,czero,u(1,nr+1),ldu)
1983  CALL claset('A',m-nr,n1-nr,czero,cone,
1984  $ u(nr+1,nr+1),ldu)
1985  END IF
1986  END IF
1987 *
1988 * The Q matrix from the first QRF is built into the left singular
1989 * matrix U. This applies to all cases.
1990 *
1991  CALL cunmqr( 'L', 'N', m, n1, n, a, lda, cwork, u,
1992  $ ldu, cwork(n+1), lwork-n, ierr )
1993 
1994 * The columns of U are normalized. The cost is O(M*N) flops.
1995  temp1 = sqrt(REAL(m)) * epsln
1996  DO 1973 p = 1, nr
1997  xsc = one / scnrm2( m, u(1,p), 1 )
1998  IF ( (xsc .LT. (one-temp1)) .OR. (xsc .GT. (one+temp1)) )
1999  $ CALL csscal( m, xsc, u(1,p), 1 )
2000  1973 CONTINUE
2001 *
2002 * If the initial QRF is computed with row pivoting, the left
2003 * singular vectors must be adjusted.
2004 *
2005  IF ( rowpiv )
2006  $ CALL claswp( n1, u, ldu, 1, m-1, iwork(iwoff+1), -1 )
2007 *
2008  ELSE
2009 *
2010 * .. the initial matrix A has almost orthogonal columns and
2011 * the second QRF is not needed
2012 *
2013  CALL clacpy( 'U', n, n, a, lda, cwork(n+1), n )
2014  IF ( l2pert ) THEN
2015  xsc = sqrt(small)
2016  DO 5970 p = 2, n
2017  ctemp = xsc * cwork( n + (p-1)*n + p )
2018  DO 5971 q = 1, p - 1
2019 * CWORK(N+(q-1)*N+p)=-TEMP1 * ( CWORK(N+(p-1)*N+q) /
2020 * $ ABS(CWORK(N+(p-1)*N+q)) )
2021  cwork(n+(q-1)*n+p)=-ctemp
2022  5971 CONTINUE
2023  5970 CONTINUE
2024  ELSE
2025  CALL claset( 'L',n-1,n-1,czero,czero,cwork(n+2),n )
2026  END IF
2027 *
2028  CALL cgesvj( 'U', 'U', 'N', n, n, cwork(n+1), n, sva,
2029  $ n, u, ldu, cwork(n+n*n+1), lwork-n-n*n, rwork, lrwork,
2030  $ info )
2031 *
2032  scalem = rwork(1)
2033  numrank = nint(rwork(2))
2034  DO 6970 p = 1, n
2035  CALL ccopy( n, cwork(n+(p-1)*n+1), 1, u(1,p), 1 )
2036  CALL csscal( n, sva(p), cwork(n+(p-1)*n+1), 1 )
2037  6970 CONTINUE
2038 *
2039  CALL ctrsm( 'L', 'U', 'N', 'N', n, n,
2040  $ cone, a, lda, cwork(n+1), n )
2041  DO 6972 p = 1, n
2042  CALL ccopy( n, cwork(n+p), n, v(iwork(p),1), ldv )
2043  6972 CONTINUE
2044  temp1 = sqrt(REAL(n))*epsln
2045  DO 6971 p = 1, n
2046  xsc = one / scnrm2( n, v(1,p), 1 )
2047  IF ( (xsc .LT. (one-temp1)) .OR. (xsc .GT. (one+temp1)) )
2048  $ CALL csscal( n, xsc, v(1,p), 1 )
2049  6971 CONTINUE
2050 *
2051 * Assemble the left singular vector matrix U (M x N).
2052 *
2053  IF ( n .LT. m ) THEN
2054  CALL claset( 'A', m-n, n, czero, czero, u(n+1,1), ldu )
2055  IF ( n .LT. n1 ) THEN
2056  CALL claset('A',n, n1-n, czero, czero, u(1,n+1),ldu)
2057  CALL claset( 'A',m-n,n1-n, czero, cone,u(n+1,n+1),ldu)
2058  END IF
2059  END IF
2060  CALL cunmqr( 'L', 'N', m, n1, n, a, lda, cwork, u,
2061  $ ldu, cwork(n+1), lwork-n, ierr )
2062  temp1 = sqrt(REAL(m))*epsln
2063  DO 6973 p = 1, n1
2064  xsc = one / scnrm2( m, u(1,p), 1 )
2065  IF ( (xsc .LT. (one-temp1)) .OR. (xsc .GT. (one+temp1)) )
2066  $ CALL csscal( m, xsc, u(1,p), 1 )
2067  6973 CONTINUE
2068 *
2069  IF ( rowpiv )
2070  $ CALL claswp( n1, u, ldu, 1, m-1, iwork(iwoff+1), -1 )
2071 *
2072  END IF
2073 *
2074 * end of the >> almost orthogonal case << in the full SVD
2075 *
2076  ELSE
2077 *
2078 * This branch deploys a preconditioned Jacobi SVD with explicitly
2079 * accumulated rotations. It is included as an option, mainly for
2080 * experimental purposes. It performs well, and can also be used.
2081 * In this implementation, this branch will be automatically activated
2082 * if the condition number sigma_max(A) / sigma_min(A) is predicted
2083 * to be greater than the overflow threshold. This is because the
2084 * a posteriori computation of the singular vectors assumes a robust
2085 * implementation of BLAS and of some LAPACK procedures, capable of working
2086 * in the presence of extreme values, e.g. when the singular values spread from
2087 * the underflow to the overflow threshold.
2088 *
2089  DO 7968 p = 1, nr
2090  CALL ccopy( n-p+1, a(p,p), lda, v(p,p), 1 )
2091  CALL clacgv( n-p+1, v(p,p), 1 )
2092  7968 CONTINUE
2093 *
2094  IF ( l2pert ) THEN
2095  xsc = sqrt(small/epsln)
2096  DO 5969 q = 1, nr
2097  ctemp = cmplx(xsc*abs( v(q,q) ),zero)
2098  DO 5968 p = 1, n
2099  IF ( ( p .GT. q ) .AND. ( abs(v(p,q)) .LE. temp1 )
2100  $ .OR. ( p .LT. q ) )
2101 * $ V(p,q) = TEMP1 * ( V(p,q) / ABS(V(p,q)) )
2102  $ v(p,q) = ctemp
2103  IF ( p .LT. q ) v(p,q) = - v(p,q)
2104  5968 CONTINUE
2105  5969 CONTINUE
2106  ELSE
2107  CALL claset( 'U', nr-1, nr-1, czero, czero, v(1,2), ldv )
2108  END IF
2109 
2110  CALL cgeqrf( n, nr, v, ldv, cwork(n+1), cwork(2*n+1),
2111  $ lwork-2*n, ierr )
2112  CALL clacpy( 'L', n, nr, v, ldv, cwork(2*n+1), n )
2113 *
2114  DO 7969 p = 1, nr
2115  CALL ccopy( nr-p+1, v(p,p), ldv, u(p,p), 1 )
2116  CALL clacgv( nr-p+1, u(p,p), 1 )
2117  7969 CONTINUE
2118 
2119  IF ( l2pert ) THEN
2120  xsc = sqrt(small/epsln)
2121  DO 9970 q = 2, nr
2122  DO 9971 p = 1, q - 1
2123  ctemp = cmplx(xsc * min(abs(u(p,p)),abs(u(q,q))),
2124  $ zero)
2125 * U(p,q) = - TEMP1 * ( U(q,p) / ABS(U(q,p)) )
2126  u(p,q) = - ctemp
2127  9971 CONTINUE
2128  9970 CONTINUE
2129  ELSE
2130  CALL claset('U', nr-1, nr-1, czero, czero, u(1,2), ldu )
2131  END IF
2132 
2133  CALL cgesvj( 'L', 'U', 'V', nr, nr, u, ldu, sva,
2134  $ n, v, ldv, cwork(2*n+n*nr+1), lwork-2*n-n*nr,
2135  $ rwork, lrwork, info )
2136  scalem = rwork(1)
2137  numrank = nint(rwork(2))
2138 
2139  IF ( nr .LT. n ) THEN
2140  CALL claset( 'A',n-nr,nr,czero,czero,v(nr+1,1),ldv )
2141  CALL claset( 'A',nr,n-nr,czero,czero,v(1,nr+1),ldv )
2142  CALL claset( 'A',n-nr,n-nr,czero,cone,v(nr+1,nr+1),ldv )
2143  END IF
2144 
2145  CALL cunmqr( 'L','N',n,n,nr,cwork(2*n+1),n,cwork(n+1),
2146  $ v,ldv,cwork(2*n+n*nr+nr+1),lwork-2*n-n*nr-nr,ierr )
2147 *
2148 * Permute the rows of V using the (column) permutation from the
2149 * first QRF. Also, scale the columns to make them unit in
2150 * Euclidean norm. This applies to all cases.
2151 *
2152  temp1 = sqrt(REAL(n)) * epsln
2153  DO 7972 q = 1, n
2154  DO 8972 p = 1, n
2155  cwork(2*n+n*nr+nr+iwork(p)) = v(p,q)
2156  8972 CONTINUE
2157  DO 8973 p = 1, n
2158  v(p,q) = cwork(2*n+n*nr+nr+p)
2159  8973 CONTINUE
2160  xsc = one / scnrm2( n, v(1,q), 1 )
2161  IF ( (xsc .LT. (one-temp1)) .OR. (xsc .GT. (one+temp1)) )
2162  $ CALL csscal( n, xsc, v(1,q), 1 )
2163  7972 CONTINUE
2164 *
2165 * At this moment, V contains the right singular vectors of A.
2166 * Next, assemble the left singular vector matrix U (M x N).
2167 *
2168  IF ( nr .LT. m ) THEN
2169  CALL claset( 'A', m-nr, nr, czero, czero, u(nr+1,1), ldu )
2170  IF ( nr .LT. n1 ) THEN
2171  CALL claset('A',nr, n1-nr, czero, czero, u(1,nr+1),ldu)
2172  CALL claset('A',m-nr,n1-nr, czero, cone,u(nr+1,nr+1),ldu)
2173  END IF
2174  END IF
2175 *
2176  CALL cunmqr( 'L', 'N', m, n1, n, a, lda, cwork, u,
2177  $ ldu, cwork(n+1), lwork-n, ierr )
2178 *
2179  IF ( rowpiv )
2180  $ CALL claswp( n1, u, ldu, 1, m-1, iwork(iwoff+1), -1 )
2181 *
2182 *
2183  END IF
2184  IF ( transp ) THEN
2185 * .. swap U and V because the procedure worked on A^*
2186  DO 6974 p = 1, n
2187  CALL cswap( n, u(1,p), 1, v(1,p), 1 )
2188  6974 CONTINUE
2189  END IF
2190 *
2191  END IF
2192 * end of the full SVD
2193 *
2194 * Undo scaling, if necessary (and possible)
2195 *
2196  IF ( uscal2 .LE. (big/sva(1))*uscal1 ) THEN
2197  CALL slascl( 'G', 0, 0, uscal1, uscal2, nr, 1, sva, n, ierr )
2198  uscal1 = one
2199  uscal2 = one
2200  END IF
2201 *
2202  IF ( nr .LT. n ) THEN
2203  DO 3004 p = nr+1, n
2204  sva(p) = zero
2205  3004 CONTINUE
2206  END IF
2207 *
2208  rwork(1) = uscal2 * scalem
2209  rwork(2) = uscal1
2210  IF ( errest ) rwork(3) = sconda
2211  IF ( lsvec .AND. rsvec ) THEN
2212  rwork(4) = condr1
2213  rwork(5) = condr2
2214  END IF
2215  IF ( l2tran ) THEN
2216  rwork(6) = entra
2217  rwork(7) = entrat
2218  END IF
2219 *
2220  iwork(1) = nr
2221  iwork(2) = numrank
2222  iwork(3) = warning
2223  IF ( transp ) THEN
2224  iwork(4) = 1
2225  ELSE
2226  iwork(4) = -1
2227  END IF
2228 
2229 *
2230  RETURN
2231 * ..
2232 * .. END OF CGEJSV
2233 * ..
2234  END
2235 *
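For reference, the following small driver is an illustrative sketch (not part of LAPACK) of one way to call CGEJSV on a tiny 4-by-3 complex matrix, requesting N left and right singular vectors. The program name TCGEJSV, the test matrix, and the workspace sizes are arbitrary choices for this example; the workspace is deliberately oversized, and the exact minimal LWORK, LRWORK and IWORK lengths for each combination of JOB* options are given in the argument documentation above.

      PROGRAM TCGEJSV
*     Illustrative driver (a sketch, not part of LAPACK): computes the
*     SVD of a small 4-by-3 complex matrix with CGEJSV.  The workspace
*     arrays are generously oversized for this tiny problem; see the
*     LWORK, LRWORK and IWORK documentation for the exact requirements.
      IMPLICIT NONE
      INTEGER            M, N, LDA, LDU, LDV, LWORK, LRWORK
      PARAMETER          ( M = 4, N = 3, LDA = M, LDU = M, LDV = N,
     $                     LWORK = 100, LRWORK = 100 )
      COMPLEX            A( LDA, N ), U( LDU, N ), V( LDV, N ),
     $                   CWORK( LWORK )
      REAL               SVA( N ), RWORK( LRWORK )
      INTEGER            IWORK( 100 ), INFO, I, J
      EXTERNAL           CGEJSV
*     .. set up an arbitrary test matrix ..
      DO 20 J = 1, N
         DO 10 I = 1, M
            A( I, J ) = CMPLX( REAL( I ), REAL( I*J ) )
   10    CONTINUE
   20 CONTINUE
*     .. JOBA='C' asks for high relative accuracy, JOBU='U' and
*     JOBV='V' for N left and right singular vectors; JOBR='R',
*     JOBT='N', JOBP='N' are common conservative choices (see the
*     argument descriptions above for the other options) ..
      CALL CGEJSV( 'C', 'U', 'V', 'R', 'N', 'N', M, N, A, LDA, SVA,
     $             U, LDU, V, LDV, CWORK, LWORK, RWORK, LRWORK,
     $             IWORK, INFO )
      IF ( INFO .EQ. 0 ) THEN
*        On exit SVA(1:N) holds the singular values; if scaling was
*        applied, RWORK(1) and RWORK(2) carry the scaling factors
*        (see the SVA and RWORK descriptions).
         WRITE( *, * ) 'Singular values: ', ( SVA( I ), I = 1, N )
      ELSE
         WRITE( *, * ) 'CGEJSV failed, INFO = ', INFO
      END IF
      END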