=================================================
NUMERICAL ANALYSIS TITLES -- Volume 2, Number 3
September, 1984
=================================================

[1] R. C. Y. Chin, G. W. Hedstrom, and F. A. Howes
Considerations on Solving Problems with Multiple Scales
Lawrence Livermore National Laboratory Report UCRL-90791

(No abstract given.)

Submitted by:    Gerald Hedstrom, L-387
Obtainable from: Gerald Hedstrom, L-387
                 P. O. Box 808
                 LLNL
                 Livermore, California 94550
                 (na.hedstrom@su-score.arpa)

---------------

[2] R. C. Y. Chin, G. W. Hedstrom, and Lewis Thigpen
Numerical methods in seismology
J. Comput. Phys. 54 (1984), 18-56.

(No abstract given.)

Submitted by:    Gerald Hedstrom, L-387
Obtainable from: Gerald Hedstrom, L-387
                 P. O. Box 808
                 LLNL
                 Livermore, California 94550
                 (na.hedstrom@su-score.arpa)

---------------

[3] R. C. Y. Chin, G. W. Hedstrom, and Lewis Thigpen
Matrix methods in synthetic seismograms
Geophys. J. R. astron. Soc. 77 (1984), 483-502.

Submitted by:    Gerald Hedstrom, L-387
Obtainable from: Gerald Hedstrom, L-387
                 P. O. Box 808
                 LLNL
                 Livermore, California 94550
                 (na.hedstrom@su-score.arpa)

---------------

[4] W. H. Enright and J. D. Pryce
Two Fortran Packages for Assessing Initial Value Methods
University of Toronto Computer Science Department TR 167/83

(No abstract given.)

Submitted by:    Kenneth R. Jackson
Obtainable from: USENET {decvax linus ihnp4 allegra floyd decwrl garfield
                 qucis cornell mcgill-vision sask watmath uw-beaver
                 ubc-vision}!utcsrgv!enright
                 or CSNET enright@toronto

---------------

[5] Allan D. Jepson and Alastair Spence
The Numerical Solution of Nonlinear Equations having Several Parameters
University of Toronto Computer Science Department TR 168/83

(No abstract submitted.)

Submitted by:    Kenneth R. Jackson
Obtainable from: USENET {decvax linus ihnp4 allegra floyd decwrl garfield
                 qucis cornell mcgill-vision sask watmath uw-beaver
                 ubc-vision}!utcsrgv!jepson
                 or CSNET jepson@toronto

---------------

[6] Tony F. Chan and Kenneth R. Jackson
The use of Iterative Linear-Equation Solvers in Codes for Large Systems
of Stiff IVPs for ODEs
University of Toronto Computer Science Department TR 170/84.

(No abstract given.)

Submitted by:    Kenneth R. Jackson
Obtainable from: USENET {decvax linus ihnp4 allegra floyd decwrl garfield
                 qucis cornell mcgill-vision sask watmath uw-beaver
                 ubc-vision}!utcsrgv!krj
                 CSNET krj@toronto

---------------

[7] Robert Schreiber
Computing Generalized Inverses and Eigenvalues of Symmetric Matrices
Using Systolic Arrays.
Stanford NA Manuscript NA-83-03.

New systolic array methods are presented for tridiagonalization of a
dense, symmetric n x n matrix in O(n**3/2) time, tridiagonalization of a
matrix of bandwidth b in O(bn) time, and finding the eigenvalues of a
symmetric, tridiagonal matrix in O(n) time.  Systolic arrays for
implementing an iterative method for the generalized inverse are given.
They allow computation of the generalized inverse of an m x n matrix in
O(m) time.

Submitted by:    Robert Schreiber
Obtainable from: Robert Schreiber
                 ESL Inc.
                 495 Java Drive, Sunnyvale, CA 94086.

---------------
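
A note on [7]: the abstract does not say which iterative method for the
generalized inverse the arrays implement.  One standard candidate is the
Ben-Israel-Cohen (Schulz-type) iteration X <- X(2I - AX), which converges
to the Moore-Penrose inverse from the starting guess X0 = alpha*A^T with
a suitable scaling alpha.  The serial Python/NumPy sketch below
illustrates that iteration only, not the systolic implementation; the
function name, scaling, and tolerance are illustrative choices, not taken
from the manuscript.

    import numpy as np

    def pinv_iterative(A, tol=1e-12, max_iter=200):
        # Sketch of the Ben-Israel-Cohen iteration X <- X (2I - A X) for
        # the Moore-Penrose inverse of an m x n matrix A.  It converges
        # when X0 = alpha * A^T with 0 < alpha < 2 / sigma_max(A)^2.
        m, n = A.shape
        alpha = 1.0 / np.linalg.norm(A, 2) ** 2    # safe scaling
        X = alpha * A.T                            # X is n x m
        for _ in range(max_iter):
            X_new = X @ (2.0 * np.eye(m) - A @ X)  # one iteration step
            if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X_new):
                return X_new
            X = X_new
        return X

    # Check against NumPy's SVD-based pseudoinverse.
    A = np.random.default_rng(0).standard_normal((5, 3))
    print(np.allclose(pinv_iterative(A), np.linalg.pinv(A), atol=1e-8))

---------------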

[8] Robert Schreiber
Implementation of Adaptive Array Algorithms
Stanford NA Manuscript NA-84-28.

The computational procedures used to implement some standard and some
recently proposed adaptive methods for direction finding and beamforming
by sensor arrays are considered.  Among these methods are the minimum
variance beamforming method and several high-resolution methods that are
based on the covariance matrix of the signal.  We discuss recursive
implementation of these procedures: the covariance matrix is
periodically updated by the addition of a matrix of rank one.  We
propose some efficient, numerically stable algorithms for updating the
solution.

Submitted by:    Robert Schreiber
Obtainable from: Robert Schreiber
                 ESL Inc.
                 495 Java Drive, Sunnyvale, CA 94086.

---------------

[9] S. S. Chen, J. J. Dongarra, and C. C. Hsiung
Multiprocessing Linear Algebra Algorithms on the CRAY X-MP-2:
Experiences with Small Granularity
Argonne National Laboratory Report MCS-TM-24 (February 1984)

This paper gives an overview of the CRAY X-MP-2 general-purpose
multiprocessor system and discusses how it can be used effectively to
solve problems that have small granularity.  An implementation is
described for linear algebra algorithms that solve systems of linear
equations when the matrix is general and when the matrix is symmetric
and positive definite.

Submitted by:    Jack J. Dongarra
Obtainable from: Jack J. Dongarra
                 Mathematics and Computer Science Division
                 Argonne National Laboratory
                 Argonne, Illinois 60439

---------------

[10] J. J. Dongarra
Performance of Various Computers Using Standard Linear Equations
Software in a Fortran Environment
Argonne National Laboratory Report MCS-TM-23 (updated August 1984)

This note compares the performance of different computer systems while
solving dense systems of linear equations using the LINPACK software in
a Fortran environment.  About 50 computers, ranging from a CRAY X-MP to
68000-based systems such as the Apollo and SUN workstations, are
compared.

Submitted by:    Jack J. Dongarra
Obtainable from: Jack J. Dongarra
                 Mathematics and Computer Science Division
                 Argonne National Laboratory
                 Argonne, Illinois 60439

---------------
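
A note on [10]: the report times the Fortran LINPACK routines on each
machine.  The sketch below only reproduces the flavor of that
measurement in Python/NumPy, timing one dense solve and converting the
time to a megaflop rate with the customary operation count
2n**3/3 + 2n**2; the matrix size and the random test data are arbitrary
choices, not those of the report.

    import time
    import numpy as np

    def dense_solve_megaflops(n=1000, seed=0):
        # Time one solve of A x = b for a random dense n x n system and
        # report a rate based on the count 2n^3/3 + 2n^2 flops.
        rng = np.random.default_rng(seed)
        A = rng.standard_normal((n, n))
        b = rng.standard_normal(n)
        t0 = time.perf_counter()
        x = np.linalg.solve(A, b)      # LU factorization + triangular solves
        elapsed = time.perf_counter() - t0
        flops = 2.0 * n**3 / 3.0 + 2.0 * n**2
        residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
        return flops / elapsed / 1.0e6, residual

    rate, res = dense_solve_megaflops()
    print(f"{rate:.1f} Mflop/s, relative residual {res:.2e}")

---------------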

[11] J. J. Dongarra and A. Sameh
On Some Parallel Banded System Solvers
Argonne National Laboratory Report MCS-TM-27 (March 1984)

This paper describes algorithms for solving narrow banded systems and
the Helmholtz difference equations that are suitable for multiprocessing
systems.  The organization of the algorithms highlights the large-grain
parallelism inherent in the problems.

Submitted by:    Jack J. Dongarra
Obtainable from: Jack J. Dongarra
                 Mathematics and Computer Science Division
                 Argonne National Laboratory
                 Argonne, Illinois 60439

---------------

[12] J. J. Dongarra, E. L. Lusk, R. A. Overbeek, B. T. Smith, and
D. C. Sorensen
New Directions in Software for Advanced Computer Architectures
Argonne National Laboratory Report MCS-TM-32 (August 1984)

This report contains a collection of short position papers that were
prepared for the Workshop on Taxonomy of Parallel Algorithms sponsored
by Los Alamos National Laboratory and held at Santa Fe, New Mexico,
Nov. 30 - Dec. 2, 1983.  These papers represent the authors' views on
directions of research that should be undertaken with regard to
programming methodology, algorithm design, and software implementation
for advanced computer architectures.  Several projects have been
initiated by the authors within the general framework outlined in these
articles since they were written.  The concepts formulated here are
expected to serve as a foundation for future work in multiprocessing.

Submitted by:    Jack J. Dongarra
Obtainable from: Jack J. Dongarra
                 Mathematics and Computer Science Division
                 Argonne National Laboratory
                 Argonne, Illinois 60439

---------------

[13] D. C. Sorensen
Analysis of Pairwise Pivoting in Gaussian Elimination
Argonne National Laboratory Report MCS-TM-26 (February 1984)

The method of Gaussian Elimination using triangularization by elementary
stabilized matrices constructed by pairwise pivoting is analyzed.  It is
shown that a variant of this scheme which is suitable for implementation
on a parallel computer is numerically stable, although the error bound
is larger than the one for the standard partial pivoting algorithm.

Submitted by:    Jack J. Dongarra
Obtainable from: Jack J. Dongarra
                 Mathematics and Computer Science Division
                 Argonne National Laboratory
                 Argonne, Illinois 60439

---------------
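
A note on [13]: in pairwise (neighbor) pivoting, each elimination step
compares only two adjacent rows and swaps them if necessary, so every
multiplier is bounded by 1 in magnitude and the comparisons stay local,
which is what makes the scheme attractive for parallel and systolic
implementations.  The serial sketch below is one rendering of that idea,
given for illustration only; it is not taken from the report, whose
subject is the error analysis of such schemes.

    import numpy as np

    def pairwise_pivot_triangularize(A):
        # Reduce A to upper triangular form, eliminating each subdiagonal
        # entry with the row directly above it; a swap keeps |multiplier| <= 1.
        U = np.array(A, dtype=float, copy=True)
        n = U.shape[0]
        for k in range(n - 1):
            # Annihilate column k from the bottom up using adjacent rows.
            for i in range(n - 1, k, -1):
                if abs(U[i, k]) > abs(U[i - 1, k]):
                    U[[i - 1, i]] = U[[i, i - 1]]        # neighbor swap
                if U[i - 1, k] != 0.0:
                    m = U[i, k] / U[i - 1, k]            # |m| <= 1
                    U[i, k:] -= m * U[i - 1, k:]
                U[i, k] = 0.0
        return U

    A = np.array([[2., 1., 1.],
                  [4., 3., 3.],
                  [8., 7., 9.]])
    U = pairwise_pivot_triangularize(A)
    print(np.allclose(np.tril(U, -1), 0.0))   # True: U is upper triangular

---------------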

[14] D. C. Sorensen
Buffering for Vector Performance on a Pipelined MIMD Machine
Argonne National Laboratory Report MCS-TM-29 (April 1984)

A technique is presented for obtaining vector performance from a
pipelined MIMD computer that does not have hardwired vector
instructions.  The specific computer in mind is the Denelcor HEP, but
the technique might influence the use and possibly even the design of
future machines with this type of architecture.  This preliminary report
presents the basic idea and demonstrates that it can be implemented.
Buffering blocks of data to registers is used in conjunction with
pipelined floating point operations to achieve vector performance.
Empirical evidence is presented to show that up to 5.8 megaflop
performance is possible from the Denelcor HEP on very regular tasks such
as matrix-vector products.  While this rate is not in the
"super-computer" range, it is certainly respectable given the hardware
capabilities of the HEP (this machine is rated at 10 MIPS peak).  This
performance indicates that an apparently minor refinement to the
architectural design could provide very efficient vector operations in
addition to the parallelism and low-overhead synchronization already
offered by the HEP architecture.

Submitted by:    Jack J. Dongarra
Obtainable from: Jack J. Dongarra
                 Mathematics and Computer Science Division
                 Argonne National Laboratory
                 Argonne, Illinois 60439

---------------

[15] Thomas F. Coleman and Alex Pothen
The Sparse Null Space Problem
Technical Report TR 84-598
Department of Computer Science, Cornell University

The sparse null space basis problem is the following.  Let A be a given
t x n matrix, t < n, of rank t.  Find a matrix N, with the fewest
nonzeroes in it, whose columns span the null space of A.  This problem
arises in the design of practical algorithms for large-scale numerical
optimization problems.  In this paper we analyze the complexity of this
problem, using combinatorial tools, and we propose approximation
algorithms.

Submitted by:    Thomas F. Coleman
Obtainable from: Thomas F. Coleman
                 Computer Science Department
                 Cornell University
                 Ithaca, NY 14853
                 (coleman@cornell.csnet)

---------------
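
A note on [15]: the difficulty in the paper is sparsity, which is
attacked with combinatorial tools.  The sketch below only illustrates
the problem statement itself: given A of size t x n with rank t, it
returns some matrix N whose columns span the null space of A (so
A N = 0), here a dense orthonormal basis obtained from the SVD rather
than the sparse basis the paper is after.  The function name and
tolerance are illustrative.

    import numpy as np

    def null_space_basis(A, rtol=1e-12):
        # Orthonormal basis for null(A) from the right singular vectors.
        u, s, vt = np.linalg.svd(A)
        rank = int((s > rtol * s[0]).sum()) if s.size else 0
        return vt[rank:].T                 # columns span the null space

    A = np.array([[1., 2., 0., 1.],
                  [0., 1., 1., 0.]])       # t = 2, n = 4, rank 2
    N = null_space_basis(A)
    print(N.shape, np.allclose(A @ N, 0.0))   # (4, 2) True

---------------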

[16] Richard H. Byrd
An Example of Irregular Convergence in Some Constrained Optimization
Methods That Use the Projected Hessian

In this paper we give examples illustrating the behavior of the
Coleman-Conn horizontal vertical method and of successive quadratic
programming with a Hessian approximation exact on the tangent space of
the constraints.  One example shows that these methods are not, in
general, one-step superlinearly convergent.

Submitted by:    Richard H. Byrd
Obtainable from: Richard H. Byrd
                 Department of Computer Science
                 University of Colorado
                 Boulder, Colorado 80309

---------------

[17] Richard H. Byrd
On the Convergence of Constrained Optimization Methods with Accurate
Hessian Information on a Subspace

In this paper we analyze methods of the type proposed by Coleman and
Conn for nonlinearly constrained optimization.  We show that if the
reduced Hessian approximation is sufficiently accurate, then the method
generates a sequence of iterates which converges one-step superlinearly.
This result applies to a quasi-Newton implementation.  If the exact
reduced Hessian is used, the method has an R-order equal to that of the
secant method.  We also prove a similar result for a modified version of
successive quadratic programming.  Finally, some parallels are pointed
out between convergence results for methods that approximate the reduced
Hessian and multiplier methods that use the reduced Hessian inverse.

Submitted by:    Richard H. Byrd
Obtainable from: Richard H. Byrd
                 Department of Computer Science
                 University of Colorado
                 Boulder, Colorado 80309

---------------

[18] G. H. Golub, Pierre Manneback, and Philippe Toint
A Comparison between Some Direct and Iterative Methods for Large Scale
Geodetic Least Squares Problems

The purpose of this paper is to describe and compare some numerical
methods for solving large dimensional linear least squares problems that
arise in geodesy and, more specifically, from Doppler positioning.  The
methods that are considered are the direct orthogonal decomposition and
the combination of conjugate gradient type algorithms with projections,
as well as the exploitation of "Property A".  Numerical results are
given, and the respective advantages of the methods are discussed with
respect to such parameters as CPU time, input/output, and storage
requirements.  Extensions of the results to more general problems are
also discussed.

Submitted by:    G. H. Golub
Obtainable from: G. H. Golub
                 Stanford University
                 Computer Science Department
                 Stanford, CA 94305
                 (golub@navajo.arpa)

---------------

[19] Franklin T. Luk
A Jacobi-like Method for Computing the QR-decomposition
Technical Report 84-612
Department of Computer Science, Cornell University

A parallel Jacobi-like method for computing the QR-decomposition of an
n x n matrix is proposed.  It requires O(n**2) processors and O(n) units
of time.  The method can be extended to handle an m x n matrix (m >= n).
The requirements become O(n**2) processors and O(m) time.

Submitted by:    Franklin T. Luk
Obtainable from: Librarian
                 Department of Computer Science
                 Cornell University
                 Ithaca, NY 14853

---------------

[20] Franklin T. Luk
A Triangular Processor Array for Computing the Singular Value
Decomposition.
Technical Report 84-625
Department of Computer Science, Cornell University

A triangular processor array for computing a singular value
decomposition (SVD) of an m x n (m >= n) matrix is proposed.  A
Jacobi-type method is used to first triangularize the given matrix and
then diagonalize the resultant triangular form.  The requirements are
O(m) time and n**2/4 + O(n) processors.

Submitted by:    Franklin T. Luk
Obtainable from: Librarian
                 Department of Computer Science
                 Cornell University
                 Ithaca, NY 14853

---------------

[21] Richard P. Brent and Franklin T. Luk
The Solution of Singular Value Problems Using Systolic Arrays.
Technical Report 84-626
Department of Computer Science, Cornell University

This paper concerns the computation of the singular value decomposition
using systolic arrays.  Two different linear-time algorithms are
presented.

Submitted by:    Franklin T. Luk
Obtainable from: Librarian
                 Department of Computer Science
                 Cornell University
                 Ithaca, NY 14853

---------------
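
A note on [19]-[21]: all three reports build on plane (Givens/Jacobi)
rotations applied in parallel by processor arrays.  As background only,
the serial sketch below runs one-sided (Hestenes) Jacobi SVD sweeps,
repeatedly rotating pairs of columns until all pairs are mutually
orthogonal; it shows the kind of rotation kernel involved but is not any
of the authors' array algorithms, and it assumes full column rank.

    import numpy as np

    def jacobi_svd(A, tol=1e-12, max_sweeps=30):
        # One-sided (Hestenes) Jacobi SVD: A ~= U @ np.diag(sigma) @ V.T,
        # assuming A has full column rank (m >= n).  Singular values are
        # returned in the order produced, not sorted.
        B = np.array(A, dtype=float, copy=True)
        m, n = B.shape
        V = np.eye(n)
        for _ in range(max_sweeps):
            off = 0.0
            for p in range(n - 1):
                for q in range(p + 1, n):
                    a = B[:, p] @ B[:, p]
                    b = B[:, q] @ B[:, q]
                    g = B[:, p] @ B[:, q]
                    off = max(off, abs(g) / np.sqrt(a * b))
                    if abs(g) <= tol * np.sqrt(a * b):
                        continue
                    zeta = (b - a) / (2.0 * g)
                    t = (1.0 if zeta >= 0.0 else -1.0) / (
                        abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                    c = 1.0 / np.sqrt(1.0 + t * t)
                    s = c * t
                    # Rotate columns p and q of both B and V.
                    Bp, Bq = B[:, p].copy(), B[:, q].copy()
                    B[:, p], B[:, q] = c * Bp - s * Bq, s * Bp + c * Bq
                    Vp, Vq = V[:, p].copy(), V[:, q].copy()
                    V[:, p], V[:, q] = c * Vp - s * Vq, s * Vp + c * Vq
            if off <= tol:
                break
        sigma = np.linalg.norm(B, axis=0)
        U = B / sigma
        return U, sigma, V

    A = np.random.default_rng(1).standard_normal((6, 4))
    U, sigma, V = jacobi_svd(A)
    print(np.allclose(U @ np.diag(sigma) @ V.T, A))   # True

---------------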