Arithmetic Operators + - * / \ ^ '

Purpose

Matrix and array arithmetic.

Synopsis

A+B
A-B
A*B         A.*B
A/B         A./B
A\B         A.\B
A^B         A.^B
A'           A.'

Description

MATLAB has two different types of arithmetic operations. Matrix arithmetic operations are defined by the rules of linear algebra. Array arithmetic operations are carried out element-by-element. The period or decimal point character (.) distinguishes the array operations from the matrix operations. However, since the matrix and array operations are the same for addition and subtraction, the character pairs .+ and .- are not used.
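For example, the following commands contrast the two forms of multiplication and show a scalar combining with a matrix (the values here are chosen only for illustration):

    A = [1 2; 3 4];
    B = [5 6; 7 8];
    A*B          % matrix product:                       [19 22; 43 50]
    A.*B         % array (element-by-element) product:   [ 5 12; 21 32]
    A + 10       % a scalar is applied to every element: [11 12; 13 14]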

+
Addition. A+B adds A and B. A and B must have the same dimensions, unless one is a scalar. A scalar can be added to a matrix of any dimension.
-
Subtraction. A-B subtracts B from A. A and B must have the same dimensions, unless one is a scalar. A scalar can be subtracted from a matrix of any dimension.
*
Matrix multiplication. A*B is the linear algebraic product of the matrices A and B. The number of columns of A must equal the number of rows of B, unless one of them is a scalar. A scalar can multiply a matrix of any dimension.
.*
Array multiplication. A.*B is the element-by-element product of the arrays A and B. A and B must have the same dimensions, unless one of them is a scalar.
\
Backslash or matrix left division. If A is a square matrix, A\B is roughly the same as inv(A)*B, except it is computed in a different way. If A is an n-by-n matrix and B is a column vector with n components, or a matrix with several such columns, then X = A\B is the solution to the equation AX = B computed by Gaussian elimination (see "Algorithm" for details). A warning message prints if A is badly scaled or nearly singular.
If A is an m-by-n matrix with m ~= n and B is a column vector with m components, or a matrix with several such columns, then X = A\B is the solution in the least squares sense to the under- or overdetermined system of equations AX = B. The effective rank, k, of A is determined from the QR decomposition with pivoting (see "Algorithm" for details). A solution X is computed which has at most k nonzero components per column. If k < n, this is usually not the same solution as pinv(A)*B, which is the least squares solution with the smallest residual norm, ||AX-B||.
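As a brief sketch (the matrices are arbitrary), the square and rectangular cases can be compared directly:

    A = [2 1; 1 3];            % square, nonsingular
    b = [3; 5];
    x = A\b;                   % solves A*x = b without forming inv(A)
    C = [1 1; 1 2; 1 3];       % 3-by-2, overdetermined
    d = [6; 0; 0];
    y = C\d;                   % least squares solution of C*y = d
    norm(C*y - d)              % residual norm of the least squares fit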
.\
Array left division. A.\B is the matrix with elements B(i,j)/A(i,j). A and B must have the same dimensions, unless one of them is a scalar.
/
Slash or matrix right division. B/A is roughly the same as B*inv(A). More precisely, B/A = (A'\B')'. See \.
./
Array right division. A./B is the matrix with elements A(i,j)/B(i,j). A and B must have the same dimensions, unless one of them is a scalar.
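The relation B/A = (A'\B')' can be checked at the command line; here is a small sketch with arbitrary values:

    A = [4 1; 2 3];
    B = [8 6; 2 4];
    B/A                        % matrix right division: solves X*A = B
    (A'\B')'                   % the same result, computed via left division
    B./A                       % element-by-element quotients B(i,j)/A(i,j)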
^
Matrix power. X^p is X to the power p, if p is a scalar. If p is an integer, the power is computed by repeated multiplication. If the integer is negative, X is inverted first. For other values of p, the calculation involves eigenvalues and eigenvectors, such that if [V,D] = eig(X), then X^p = V*D.^p/V.
If x is a scalar and P is a matrix, x^P is x raised to the matrix power P using eigenvalues and eigenvectors. X^P, where X and P are both matrices, is an error.
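A brief sketch with an arbitrary 2-by-2 matrix shows the integer case agreeing with repeated multiplication, and a fractional power computed through the eigendecomposition:

    X = [2 1; 1 2];
    X^2                        % same as X*X
    X^-1                       % same as inv(X)
    [V,D] = eig(X);
    V*D.^0.5/V                 % same as X^0.5, the matrix square root
    X.^2                       % array power: each element squared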
.^
Array power. A.^B is the matrix with elements A(i,j) to the B(i,j) power. A and B must have the same dimensions, unless one of them is a scalar.
'
Matrix transpose. A' is the linear algebraic transpose of A. For complex matrices, this is the complex conjugate transpose.
.'
Array transpose. A.' is the array transpose of A. For complex matrices, this does not involve conjugation.
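A short sketch with an arbitrary complex matrix makes the difference visible:

    Z = [1+2i 3; 4 5-6i];
    Z'                         % conjugate transpose:           [1-2i 4; 3 5+6i]
    Z.'                        % transpose without conjugation: [1+2i 4; 3 5-6i]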

Example

Here are two vectors, and the results of various matrix and array operations on them, printed with format rat.

----------------------------------------------------------------------------------------
Matrix Operations                                     Array Operations                    
----------------------------------------------------------------------------------------
x                  1                                  y                   4               
                   2                                                      5               
                   3                                                      6               
x'                 1     2      3                     y'                  4     5      6  
x+y                5                                  x-y                -3               
                   7                                                     -3               
                   9                                                     -3               
x + 2              3                                  x-2                -1               
                   4                                                      0               
                   5                                                      1               
x * y              Error                              x.*y                4               
                                                                         10                
                                                                         18                
x'*y               32                                 x'.*y              Error             
x*y'                4      5      6                   x.*y'              Error             
                    8     10     12                                                        
                   12     15     18                                                         
x*2                2                                  x.*2                2               
                   4                                                      4               
                   6                                                      6               
x\y                16/7                               x.\y                4               
                                                                         5/2               
                                                                          2               
2\x                1/2                                2./x                2               
                    1                                                     1               
                   3/2                                                   2/3               
x/y                0      0      1/6                  x./y               1/4               
                   0      0      1/3                                     2/5               
                   0      0      1/2                                     1/2               
x/2                1/2                                x./2               1/2               
                    1                                                     1               
                   3/2                                                   3/2               
x^y                Error                              x.^y                 1             
                                                                          32              
                                                                         729               
x^2                Error                              x.^2                 1             
                                                                           4             
                                                                           9             
2^x                Error                              2.^x                 2             
                                                                           4             
                                                                           8             
(x+i*y)'           1 - 4i      2 - 5i      3 - 6i                                      
(x+i*y).'          1 + 4i      2 + 5i      3 + 6i                                         
----------------------------------------------------------------------------------------
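The entries in the table can be reproduced at the command line, for example:

    format rat
    x = [1; 2; 3];
    y = [4; 5; 6];
    x'*y                       % 32
    x.*y                       % the column vector 4, 10, 18
    x\y                        % 16/7, the least squares solution of x*a = y
    (x+i*y)'                   % 1-4i  2-5i  3-6i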

Algorithm

The specific algorithm used for solving the simultaneous linear equations denoted by X = A\B and X = B/A depends upon the structure of the coefficient matrix A.

  • If A is a triangular matrix, or a permutation of a triangular matrix, then X can be computed quickly by a permuted backsubstitution algorithm. The check for triangularity is done for full matrices by testing for zero elements and for sparse matrices by accessing the sparse data structure. Most nontriangular matrices are detected almost immediately, so this check requires a negligible amount of time.
  • If A is symmetric, or Hermitian, and has positive diagonal elements, then a Cholesky factorization is attempted (see chol). If A is sparse, a symmetric minimum degree preordering is applied (see symmmd and spparms). If A is found to be positive definite, the Cholesky factorization attempt is successful and requires less than half the time of a general factorization. Nonpositive definite matrices are usually detected almost immediately, so this check also requires little time. If successful, the Cholesky factorization is
    A = R'*R
    
    where R is upper triangular. The solution X is computed by solving two triangular systems,
    X = R\(R'\B)
    
  • If A is square, but not a permutation of a triangular matrix, or is not Hermitian with positive diagonal elements, or the Cholesky factorization fails, then a general triangular factorization is computed by Gaussian elimination with partial pivoting (see lu). If A is sparse, a nonsymmetric minimum degree preordering is applied (see colmmd and spparms). This results in
    A = L*U
    
    where L is a permutation of a lower triangular matrix and U is an upper triangular matrix. Then X is computed by solving two permuted triangular systems.
    X = U\(L\B)
    
  • If A is not square and is full, then Householder reflections are used to compute an orthogonal-triangular factorization.
    A*P = Q*R
    
    where P is a permutation, Q is orthogonal and R is upper triangular (see qr). The least squares solution X is computed with
    X = P*(R\(Q'*B))
    
  • If A is not square and is sparse, then the augmented matrix
    S = [c*I A; A' 0]
    
    is formed (see spaugment). The default value of the residual scaling factor is c = max(max(abs(A)))/1000 (see spparms). The least squares solution X and the residual
    R = B-A*X
    
    are computed by solving
    S * [R/c; X] = [B; 0]
    
    with minimum degree preordering and sparse Gaussian elimination with numerical pivoting.

The various matrix factorizations are computed by MATLAB implementations of the algorithms employed by LINPACK routines ZGECO, ZGEFA and ZGESL for square matrices and ZQRDC and ZQRSL for rectangular matrices. See the LINPACK User's Guide for details.
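The square and rectangular paths described above can be imitated step by step. The following sketch uses arbitrary matrices and is only an illustration; A\B itself selects the appropriate path automatically:

    A = [4 1 0; 1 3 1; 0 1 2];     % symmetric positive definite
    B = [1; 2; 3];
    R = chol(A);                   % Cholesky factorization, A = R'*R
    X1 = R\(R'\B);                 % two triangular solves, same as A\B
    [L,U] = lu(A);                 % general square case, A = L*U
    X2 = U\(L\B);                  % two permuted triangular solves
    C = [1 1; 1 2; 1 3];           % rectangular, full column rank
    d = [6; 0; 0];
    [Q,R,P] = qr(C);               % orthogonal-triangular factorization, C*P = Q*R
    Y = P*(R\(Q'*d));              % least squares solution, same as C\d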

Diagnostics

From matrix division, if a square A is singular:

    Matrix is singular to working precision.

From element-wise division, if the divisor has zero elements:

    Divide by zero.

On machines without IEEE arithmetic, like the VAX, the above two operations generate the error messages shown. On machines with IEEE arithmetic, only warning messages are generated. The matrix division returns a matrix with each element set to Inf; the element-wise division produces NaNs or Infs where appropriate.

If the inverse was found, but is not reliable:

    Warning: Matrix is close to singular or badly scaled.
        Results may be inaccurate.  RCOND = xxx

From matrix division, if a nonsquare A is rank deficient:

    Warning: Rank deficient, rank = xxx tol = xxx

See Also

det, inv, lu, orth, qr, rcond, rref
    

References

[1] J.J. Dongarra, J.R. Bunch, C.B. Moler, and G.W. Stewart, LINPACK User's Guide, SIAM, Philadelphia, 1979.
