Syntax

A+B     A-B
A*B     A.*B
A/B     A./B
A\B     A.\B
A^B     A.^B
A'      A.'

The character pairs .+ and .- are not used.
Description

+
A+B adds A and B. A and B must have the same dimensions, unless one is a scalar. A scalar can be added to a matrix of any dimension.
-
A-B subtracts B from A. A and B must have the same dimensions, unless one is a scalar. A scalar can be subtracted from a matrix of any dimension.
*
A*B is the linear algebraic product of the matrices A and B. The number of columns of A must equal the number of rows of B, unless one of them is a scalar. A scalar can multiply a matrix of any dimension.
.*
A.*B is the element-by-element product of the arrays A and B. A and B must have the same dimensions, unless one of them is a scalar.
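The distinction between the matrix product and the array product can be seen on a small example (the values here are chosen only for illustration):

```matlab
A = [1 2; 3 4];
B = [5 6; 7 8];

A*B     % matrix product:          [19 22; 43 50]
A.*B    % element-by-element:      [ 5 12; 21 32]
2*A     % a scalar multiplies every element: [2 4; 6 8]
```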
\
If A is a square matrix, A\B is roughly the same as inv(A)*B, except it is computed in a different way. If A is an n-by-n matrix and B is a column vector with n components, or a matrix with several such columns, then X = A\B is the solution to the equation AX = B computed by Gaussian elimination (see "Algorithm" for details). A warning message prints if A is badly scaled or nearly singular.
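As a sketch of the square case (the particular system is invented for illustration):

```matlab
A = [2 1; 1 3];          % square and well conditioned
b = [3; 5];
x = A\b;                 % solves A*x = b by Gaussian elimination
% x is [4/5; 7/5]; it agrees with inv(A)*b to roundoff,
% but A\b never forms the inverse explicitly.
```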
If A is an m-by-n matrix with m ~= n and B is a column vector with m components, or a matrix with several such columns, then X = A\B is the solution in the least squares sense to the under- or overdetermined system of equations AX = B. The effective rank, k, of A is determined from the QR decomposition with pivoting (see "Algorithm" for details). A solution X is computed that has at most k nonzero components per column. If k < n, this is usually not the same solution as pinv(A)*B, which is the least squares solution with the smallest residual norm, ||AX-B||.

.\
A.\B is the matrix with elements B(i,j)/A(i,j). A and B must have the same dimensions, unless one of them is a scalar.
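The overdetermined case described above is the usual way to compute least squares fits. A sketch, with data invented for illustration, fitting a straight line y ≈ c(1) + c(2)*t:

```matlab
t = [0; 1; 2; 3];
y = [1.1; 1.9; 3.2; 3.9];
A = [ones(size(t)) t];   % 4-by-2: more equations than unknowns
c = A\y;                 % least squares solution of A*c = y
% c(1) is the intercept and c(2) the slope of the best fit;
% the residual y - A*c is orthogonal to the columns of A.
```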
/
B/A is roughly the same as B*inv(A). More precisely, B/A = (A'\B')'. See \.
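The identity can be checked directly (matrices invented for illustration):

```matlab
A = [2 0; 1 1];          % must be square and nonsingular
B = [4 2; 3 5];
X1 = B/A;
X2 = (A'\B')';
% X1 and X2 agree to roundoff, and X1*A reproduces B.
```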
./
A./B is the matrix with elements A(i,j)/B(i,j). A and B must have the same dimensions, unless one of them is a scalar.
^
X^p is X to the power p, if p is a scalar. If p is an integer, the power is computed by repeated multiplication. If the integer is negative, X is inverted first. For other values of p, the calculation involves eigenvalues and eigenvectors, such that if [V,D] = eig(X), then X^p = V*D.^p/V.
If x is a scalar and P is a matrix, x^P is x raised to the matrix power P using eigenvalues and eigenvectors. X^P, where X and P are both matrices, is an error.

.^
A.^B is the matrix with elements A(i,j) to the B(i,j) power. A and B must have the same dimensions, unless one of them is a scalar.
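A small sketch of the difference between ^ and .^ (the matrix is chosen for illustration):

```matlab
X = [1 1; 0 2];
X^2      % matrix power, X*X:  [1 3; 0 4]
X.^2     % element powers:     [1 1; 0 4]
X^-1     % negative integer power: X is inverted first
2.^X     % 2 raised to each element: [2 2; 1 4]
```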
'
A' is the linear algebraic transpose of A. For complex matrices, this is the complex conjugate transpose.
.'
A.' is the array transpose of A. For complex matrices, this does not involve conjugation.
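The two transposes differ only for complex data; a sketch:

```matlab
z = [1+2i  3-1i];
z'      % complex conjugate transpose: [1-2i; 3+1i]
z.'     % array transpose, no conjugation: [1+2i; 3-1i]
```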
Examples

Here are two vectors, and the results of various matrix and array operations on them, printed with format rat.

Matrix Operations                       Array Operations
--------------------------------------------------------------------------
x    = [1; 2; 3]                        y     = [4; 5; 6]
x'   = [1 2 3]                          y'    = [4 5 6]
x+y  = [5; 7; 9]                        x-y   = [-3; -3; -3]
x+2  = [3; 4; 5]                        x-2   = [-1; 0; 1]
x*y    Error                            x.*y  = [4; 10; 18]
x'*y = 32                               x'.*y   Error
x*y' = [4 5 6; 8 10 12; 12 15 18]       x.*y'   Error
x*2  = [2; 4; 6]                        x.*2  = [2; 4; 6]
x\y  = 16/7                             x.\y  = [4; 5/2; 2]
2\x  = [1/2; 1; 3/2]                    2./x  = [2; 1; 2/3]
x/y  = [0 0 1/6; 0 0 1/3; 0 0 1/2]      x./y  = [1/4; 2/5; 1/2]
x/2  = [1/2; 1; 3/2]                    x./2  = [1/2; 1; 3/2]
x^y    Error                            x.^y  = [1; 32; 729]
x^2    Error                            x.^2  = [1; 4; 9]
2^x    Error                            2.^x  = [2; 4; 8]
(x+i*y)'  = [1-4i 2-5i 3-6i]            (x+i*y).' = [1+4i 2+5i 3+6i]
--------------------------------------------------------------------------
Algorithm

The specific algorithm used for solving a system of simultaneous linear equations, X = A\B or X = B/A, depends upon the structure of the coefficient matrix A.
If A is a triangular matrix, or a permutation of a triangular matrix, then X can be computed quickly by a permuted backsubstitution algorithm. The check for triangularity is done for full matrices by testing for zero elements, and for sparse matrices by accessing the sparse data structure. Most nontriangular matrices are detected almost immediately, so this check requires a negligible amount of time.

If A is symmetric, or Hermitian, and has positive diagonal elements, then a Cholesky factorization is attempted (see chol). If A is sparse, a symmetric minimum degree preordering is applied (see symmmd and spparms). If A is found to be positive definite, the Cholesky factorization attempt is successful and requires less than half the time of a general factorization. Nonpositive definite matrices are usually detected almost immediately, so this check also requires little time. If successful, the Cholesky factorization is
A = R'*R
where R is upper triangular. The solution X is computed by solving two triangular systems:

X = R\(R'\B)
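The Cholesky path can be reproduced by hand; a sketch with a small positive definite matrix (values invented for illustration):

```matlab
A = [4 1; 1 3];          % symmetric positive definite
B = [1; 2];
R = chol(A);             % A = R'*R, with R upper triangular
X = R\(R'\B);            % two triangular solves
% X agrees with A\B to roundoff; each triangular solve is cheap.
```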
If A is square, but not a permutation of a triangular matrix, or is not Hermitian with positive diagonal elements, or the Cholesky factorization fails, then a general triangular factorization is computed by Gaussian elimination with partial pivoting (see lu). If A is sparse, a nonsymmetric minimum degree preordering is applied (see colmmd and spparms). This results in
A = L*U
where L is a permutation of a lower triangular matrix and U is an upper triangular matrix. Then X is computed by solving two permuted triangular systems:

X = U\(L\B)
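Similarly for the general square case (the matrix is invented for illustration):

```matlab
A = [0 2 1; 1 1 0; 2 0 1];   % square, not triangular, not positive definite
B = [3; 2; 4];
[L,U] = lu(A);               % L is a permutation of a lower triangular matrix
X = U\(L\B);                 % two permuted triangular solves
% X agrees with A\B to roundoff.
```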
If A is not square and is full, then Householder reflections are used to compute an orthogonal-triangular factorization

A*P = Q*R

where P is a permutation, Q is orthogonal, and R is upper triangular (see qr). The least squares solution X is computed with

X = P*(R\(Q'*B))
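A sketch of the full rectangular path (data invented for illustration):

```matlab
A = [1 0; 1 1; 1 2];         % full, 3-by-2, overdetermined
B = [1; 2; 4];
[Q,R,P] = qr(A);             % A*P = Q*R, with column pivoting
X = P*(R\(Q'*B));            % the least squares solution
% X agrees with A\B to roundoff.
```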
If A is not square and is sparse, then the augmented matrix

S = [c*I A; A' 0]

is formed (see spaugment). The default value of the residual scaling factor is c = max(max(abs(A)))/1000 (see spparms). The least squares solution X and the residual R = B-A*X are computed by solving

S * [R/c; X] = [B; 0]
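The augmented system can be formed explicitly; a sketch with a small sparse matrix (dimensions and values invented for illustration):

```matlab
A = sparse([1 0; 1 1; 1 2]);        % sparse, 3-by-2
B = [1; 2; 4];
c = max(max(abs(A)))/1000;          % default residual scaling factor
S = [c*speye(3) A; A' sparse(2,2)]; % the augmented matrix
v = S \ [B; 0; 0];                  % v is [R/c; X]
X = v(4:5);                         % the last n entries are the solution
```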
The factorizations use the LINPACK routines ZGECO, ZGEFA, and ZGESL for square matrices, and ZQRDC and ZQRSL for rectangular matrices. See the LINPACK User's Guide for details.
Diagnostics

From matrix division, if A is singular:

Matrix is singular to working precision.

From element-wise division, if the divisor has zero elements:

Divide by zero.

On machines without IEEE arithmetic, like the VAX, the above two operations generate the error messages shown. On machines with IEEE arithmetic, only warning messages are generated. The matrix division returns a matrix with each element set to Inf; the element-wise division produces NaNs or Infs where appropriate.

From matrix division, if the inverse was found, but is not reliable:

Warning: Matrix is close to singular or badly scaled.
Results may be inaccurate. RCOND = xxx

From matrix division, if a nonsquare A is rank deficient:

Warning: Rank deficient, rank = xxx  tol = xxx
See Also

det, inv, lu, orth, qr, rcond, rref
(c) Copyright 1994 by The MathWorks, Inc.