Matrix (mathematics)
{{Short description|Array of numbers}}
{{hatnote group|{{Redirect|Matrix theory|the physics topic|Matrix theory (physics)}}{{Other uses of|Matrix}}}}

[Figure: An {{math|m × n}} matrix: the {{math|m}} rows are horizontal and the {{math|n}} columns are vertical. Each element of a matrix is often denoted by a variable with two subscripts.]

In mathematics, a matrix ({{plural form}}: matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object. For example,

\begin{bmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix}

is a matrix with two rows and three columns. This is often referred to as a "two-by-three matrix", a "2 \times 3 matrix", or a matrix of dimension 2 \times 3.

Matrices are used to represent linear maps and allow explicit computations in linear algebra. Therefore, the study of matrices is a large part of linear algebra, and most properties and operations of abstract linear algebra can be expressed in terms of matrices. For example, matrix multiplication represents the composition of linear maps.

Not all matrices are related to linear algebra. This is, in particular, the case in graph theory, of incidence matrices and adjacency matrices. However, in the case of adjacency matrices, matrix multiplication or a variant of it allows the simultaneous computation of the number of paths between any two vertices, and of the shortest length of a path between two vertices. This article focuses on matrices related to linear algebra, and, unless otherwise specified, all matrices represent linear maps or may be viewed as such.

Square matrices, matrices with the same number of rows and columns, play a major role in matrix theory. Square matrices of a given dimension form a noncommutative ring, which is one of the most common examples of a noncommutative ring. The determinant of a square matrix is a number associated with the matrix, which is fundamental for the study of a square matrix; for example, a square matrix is invertible if and only if it has a nonzero determinant, and the eigenvalues of a square matrix are the roots of its characteristic polynomial, which is itself a determinant.

In geometry, matrices are widely used for specifying and representing geometric transformations (for example rotations) and coordinate changes. In numerical analysis, many computational problems are solved by reducing them to a matrix computation, and this often involves computing with matrices of huge dimension. Matrices are used in most areas of mathematics and most scientific fields, either directly, or through their use in geometry and numerical analysis.

Matrix theory is the branch of mathematics that focuses on the study of matrices. It was initially a sub-branch of linear algebra, but soon grew to include subjects related to graph theory, algebra, combinatorics and statistics.

Definition

A matrix is a rectangular array of numbers (or other mathematical objects), called the entries of the matrix. Matrices are subject to standard operations such as addition and multiplication.{{Harvard citations |author=Lang |year=2002 |nb=yes}} Most commonly, a matrix over a field F is a rectangular array of elements of F.{{harvtxt|Fraleigh|1976|p=209}}{{harvtxt|Nering|1970|p=37}} A real matrix and a complex matrix are matrices whose entries are respectively real numbers or complex numbers. More general types of entries are discussed below. For instance, this is a real matrix:
\mathbf{A} = \begin{bmatrix}
-1.3 & 0.6 \\
20.4 & 5.5 \\
9.7 & -6.2
\end{bmatrix}.

The numbers, symbols, or expressions in the matrix are called its entries or its elements. The horizontal and vertical lines of entries in a matrix are called rows and columns, respectively.
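For a computational view of the same object, a matrix can be stored as a two-dimensional array. A minimal sketch in Python with NumPy (one common choice of library, not part of the definition itself):

<syntaxhighlight lang="python">
import numpy as np

# The 3-by-2 real matrix A from the example above.
A = np.array([[-1.3,  0.6],
              [20.4,  5.5],
              [ 9.7, -6.2]])

print(A.shape)   # (3, 2): three rows, two columns
print(A[0, 1])   # 0.6 -- NumPy indexes from 0, so this is the entry a_{1,2}
</syntaxhighlight>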

Size

The size of a matrix is defined by the number of rows and columns it contains. There is no limit to the number of rows and columns that a matrix (in the usual sense) can have, as long as they are positive integers. A matrix with m rows and n columns is called an m \times n matrix, or m-by-n matrix, where m and n are called its dimensions. For example, the matrix \mathbf{A} above is a 3 \times 2 matrix.

Matrices with a single row are called row vectors, and those with a single column are called column vectors. A matrix with the same number of rows and columns is called a square matrix.{{Citation |last=Weisstein |first=Eric W. |title=Matrix |website=mathworld.wolfram.com |access-date=2020-08-19}} A matrix with an infinite number of rows or columns (or both) is called an infinite matrix. In some contexts, such as computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an empty matrix.

{| class="wikitable"
|+ Overview of a matrix size
!scope="col"| Name
!scope="col"| Size
!scope="col"| Example
!scope="col"| Description
!scope="col"| Notation
|-
!scope="row"| Row vector
| 1{{nbsp}}×{{nbsp}}n
| \begin{bmatrix} 3 & 7 & 2 \end{bmatrix}
| A matrix with one row, sometimes used to represent a vector
| a_i
|-
!scope="row"| Column vector
| n{{nbsp}}×{{nbsp}}1
| \begin{bmatrix} 4 \\ 1 \\ 8 \end{bmatrix}
| A matrix with one column, sometimes used to represent a vector
| a_j
|-
!scope="row"| Square matrix
| n{{nbsp}}×{{nbsp}}n
| \begin{bmatrix} 9 & 13 & 5 \\ 1 & 11 & 7 \\ 2 & 6 & 3 \end{bmatrix}
| Sometimes used to represent a linear transformation from a vector space to itself, such as reflection, rotation, or shearing
| \mathbf{A}
|}

Notation

The specifics of symbolic matrix notation vary widely, with some prevailing trends. Matrices are commonly written in square brackets or parentheses, so that an m \times n matrix \mathbf{A} is represented as

\mathbf{A} =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix} =
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}.

This may be abbreviated by writing only a single generic term, possibly along with indices, as in

\mathbf{A} = \left(a_{ij}\right), \quad \left[a_{ij}\right], \quad \text{or} \quad \left(a_{ij}\right)_{1 \leq i \leq m,\; 1 \leq j \leq n},

or \mathbf{A} = (a_{i,j})_{1 \leq i,j \leq n} in the case that n = m.

Matrices are usually symbolized using upper-case letters (such as \mathbf{A} in the examples above), while the corresponding lower-case letters, with two subscript indices (e.g., a_{11} or a_{1,1}), represent the entries. In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface roman (non-italic), to further distinguish matrices from other mathematical objects. An alternative notation involves the use of a double-underline with the variable name, with or without boldface style, as in \underline{\underline{A}}.

The entry in the {{math|i}}-th row and {{math|j}}-th column of a matrix {{math|A}} is sometimes referred to as the (i,j) entry of the matrix, and commonly denoted by a_{i,j} or a_{ij}. Alternative notations for that entry are \mathbf{A}[i,j] and \mathbf{A}_{i,j}. For example, the (1,3) entry of the following matrix \mathbf{A} is {{math|5}} (also denoted a_{13}, a_{1,3}, \mathbf{A}[1,3] or \mathbf{A}_{1,3}):
\mathbf{A} = \begin{bmatrix}
4 & -7 & \color{red}{5} & 0 \\
-2 & 0 & 11 & 8 \\
19 & 1 & -3 & 12
\end{bmatrix}

Sometimes, the entries of a matrix can be defined by a formula such as a_{i,j} = f(i,j). For example, each of the entries of the following matrix \mathbf{A} is determined by the formula a_{ij} = i - j:
\mathbf{A} = \begin{bmatrix}
0 & -1 & -2 & -3 \\
1 & 0 & -1 & -2 \\
2 & 1 & 0 & -1
\end{bmatrix}

In this case, the matrix itself is sometimes defined by that formula, within square brackets or double parentheses. For example, the matrix above is defined as \mathbf{A} = [i-j] or \mathbf{A} = ((i-j)). If the matrix size is m \times n, the above-mentioned formula f(i,j) is valid for any i = 1, \dots, m and any j = 1, \dots, n. This can be either specified separately, or indicated using m \times n as a subscript. For instance, the matrix \mathbf{A} above is 3 \times 4, and can be defined as \mathbf{A} = [i-j] \; (i = 1, 2, 3;\; j = 1, \dots, 4) or \mathbf{A} = [i-j]_{3 \times 4}.

Some programming languages utilize doubly subscripted arrays (or arrays of arrays) to represent an {{math|m}}-by-{{math|n}} matrix. Some programming languages start the numbering of array indexes at zero, in which case the entries of an {{math|m}}-by-{{math|n}} matrix are indexed by 0 \leq i \leq m-1 and 0 \leq j \leq n-1.{{Harvard citations |last1=Oualline |year=2003 |loc=Ch. 5 |nb=yes}} This article follows the more common convention in mathematical writing where enumeration starts from {{math|1}}.

The set of all {{math|m}}-by-{{math|n}} real matrices is often denoted \mathcal{M}(m, n) or \mathcal{M}_{m \times n}(\R). The set of all {{math|m}}-by-{{math|n}} matrices over another field, or over a ring {{math|R}}, is similarly denoted \mathcal{M}(m, n, R) or \mathcal{M}_{m \times n}(R). If {{math|m {{=}} n}}, such as in the case of square matrices, one does not repeat the dimension: \mathcal{M}(n, R) or \mathcal{M}_n(R).
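Since this section touches both on formula-defined entries and on zero-based array indexing, a short sketch may help; the only subtlety is the shift between 0-based array indices and the 1-based mathematical convention (plain Python, no external libraries assumed):

<syntaxhighlight lang="python">
# Build the 3-by-4 matrix A with entries a_{ij} = i - j,
# using the 1-based mathematical convention for i and j.
m, n = 3, 4
A = [[i - j for j in range(1, n + 1)] for i in range(1, m + 1)]

for row in A:
    print(row)
# [0, -1, -2, -3]
# [1, 0, -1, -2]
# [2, 1, 0, -1]

# With 0-based indexing, the mathematical entry a_{1,3} is A[0][2]:
print(A[0][2])  # -2
</syntaxhighlight>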

Matrix multiplication

[Figure: Schematic depiction of the matrix product AB of two matrices A and B.]

Multiplication of two matrices is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are given by the dot product of the corresponding row of A and the corresponding column of B:

[\mathbf{AB}]_{i,j} = a_{i,1}b_{1,j} + a_{i,2}b_{2,j} + \cdots + a_{i,n}b_{n,j} = \sum_{k=1}^{n} a_{i,k} b_{k,j}.

Matrices can be used to compactly write and work with multiple linear equations, that is, systems of linear equations. For example, if A is an m-by-n matrix, x designates a column vector (that is, an n×1 matrix) of n variables x{{sub|1}}, x{{sub|2}}, ..., x{{sub|n}}, and b is an m×1 column vector, then the matrix equation

\mathbf{A}\mathbf{x} = \mathbf{b}

is equivalent to the system of linear equations a_{1,1}x_1 + \cdots + a_{1,n}x_n = b_1, \;\ldots,\; a_{m,1}x_1 + \cdots + a_{m,n}x_n = b_m.
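The entrywise definition of the product translates directly into code. The sketch below spells out the sum over k on plain Python lists; in practice one would call an optimized routine such as NumPy's @ operator, shown for comparison:

<syntaxhighlight lang="python">
import numpy as np

def matmul(A, B):
    """Product of an m-by-n matrix A and an n-by-p matrix B (lists of rows)."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "columns of A must equal rows of B"
    # Entry (i, j) is the dot product of row i of A with column j of B.
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 0, 2],
     [-1, 3, 1]]                   # 2-by-3
B = [[3, 1],
     [2, 1],
     [1, 0]]                       # 3-by-2

print(matmul(A, B))                # [[5, 1], [4, 2]]
print(np.array(A) @ np.array(B))   # same result with NumPy
</syntaxhighlight>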
Analysis

The Jacobian matrix of a differentiable function f\colon \R^n \to \R^m collects the first-order partial derivatives of f. If n > m, and if the rank of the Jacobian matrix attains its maximal value m, f is locally invertible at that point, by the implicit function theorem.{{Harvard citations |last1=Lang |year=1987a |nb=yes |loc=Ch. XVI.5}} For a more advanced, and more general statement see {{Harvard citations|last1=Lang|year=1969|nb=yes|loc=Ch. VI.2}}.
Partial differential equations can be classified by considering the matrix of coefficients of the highest-order differential operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has a decisive influence on the set of possible solutions of the equation in question.{{Harvard citations |last1=Gilbarg |last2=Trudinger |year=2001 |nb=yes}}

The finite element method is an important numerical method to solve partial differential equations, widely applied in simulating complex physical systems. It attempts to approximate the solution to some equation by piecewise linear functions, where the pieces are chosen with respect to a sufficiently fine grid, which in turn can be recast as a matrix equation.{{Harvard citations |last1=Šolín |year=2005 |nb=yes |loc=Ch. 2.5}} See also stiffness method.{{Clear}}

Probability theory and statistics

[Figure: Two different Markov chains. The chart depicts the number of particles (of a total of 1000) in state "2". Both limiting values can be determined from the transition matrices, which are given by \begin{bmatrix} 0.7 & 0 \\ 0.3 & 1 \end{bmatrix} (red) and \begin{bmatrix} 0.7 & 0.2 \\ 0.3 & 0.8 \end{bmatrix} (black).]

Stochastic matrices are square matrices whose rows are probability vectors, that is, whose entries are non-negative and sum up to one. Stochastic matrices are used to define Markov chains with finitely many states.{{Harvard citations |last1=Latouche |last2=Ramaswami |year=1999 |nb=yes}} A row of the stochastic matrix gives the probability distribution for the next position of some particle currently in the state that corresponds to the row. Properties of the Markov chain, like absorbing states, that is, states that any particle attains eventually, can be read off the eigenvectors of the transition matrices.{{Harvard citations |last1=Mehata |last2=Srinivasan |year=1978 |nb=yes |loc=Ch. 2.8}}

Statistics also makes use of matrices in many different forms.{{Citation |last=Healy |first=Michael |title=Matrices for Statistics |year=1986 |publisher=Oxford University Press |isbn=978-0-19-850702-4 |author-link=Michael Healy (statistician)}} Descriptive statistics is concerned with describing data sets, which can often be represented as data matrices, which may then be subjected to dimensionality reduction techniques. The covariance matrix encodes the mutual variance of several random variables.{{Harvard citations |last1=Krzanowski |year=1988 |loc=Ch. 2.2., p. 60 |nb=yes}} Another technique using matrices is linear least squares, a method that approximates a finite set of pairs (x{{sub|1}}, y{{sub|1}}), (x{{sub|2}}, y{{sub|2}}), ..., (x{{sub|N}}, y{{sub|N}}) by a linear function
y{{sub|i}} ≈ ax{{sub|i}} + b, i = 1, ..., N
which can be formulated in terms of matrices, related to the singular value decomposition of matrices.{{Harvard citations |last1=Krzanowski |year=1988 |loc=Ch. 4.1 |nb=yes}}Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such as matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory to physics.{{Harvard citations |authorlink1=Brian Conrey|last1=Conrey |year=2007 |nb=yes}}{{Harvard citations |last1=Zabrodin |last2=Brezin |last3=Kazakov |last4=Serban |last5=Wiegmann |year=2006 |nb=yes}}
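As a small illustration of reading limiting behaviour off a transition matrix (a sketch added here, not part of the original text), the limiting distribution of the black chain in the figure above is the eigenvector of its transition matrix for eigenvalue 1:

<syntaxhighlight lang="python">
import numpy as np

# Transition matrix of the "black" chain from the figure; here entry
# T[i, j] is the probability of moving from state j to state i.
T = np.array([[0.7, 0.2],
              [0.3, 0.8]])

eigvals, eigvecs = np.linalg.eig(T)
# The limiting (stationary) distribution is the eigenvector for
# eigenvalue 1, rescaled so that its entries sum to one.
k = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, k])
pi /= pi.sum()
print(pi)  # [0.4 0.6]: in the limit, 600 of the 1000 particles sit in state "2"
</syntaxhighlight>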

Symmetries and transformations in physics

{{Further|Symmetry in physics}}

Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more specifically, by their behavior under the spin group. Concrete representations involving the Pauli matrices and more general gamma matrices are an integral part of the physical description of fermions, which behave as spinors.{{Harvard citations |last1=Itzykson |last2=Zuber |year=1980 |nb=yes |loc=Ch. 2}} For the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3); for their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions, quantum chromodynamics. The Cabibbo–Kobayashi–Maskawa matrix, in turn, expresses the fact that the basic quark states that are important for weak interactions are not the same as, but linearly related to, the basic quark states that define particles with specific and distinct masses. See {{Harvard citations |last1=Burgess |last2=Moore |year=2007 |nb=yes |loc=section 1.6.3. (SU(3)), section 2.4.3.2. (Kobayashi–Maskawa matrix)}}.

Linear combinations of quantum states

The first model of quantum mechanics (Heisenberg, 1925) represented the theory's operators by infinite-dimensional matrices acting on quantum states.{{Harvard citations |last1=Schiff |year=1968 |nb=yes |loc=Ch. 6}} This is also referred to as matrix mechanics. One particular example is the density matrix that characterizes the "mixed" state of a quantum system as a linear combination of elementary, "pure" eigenstates.{{Harvard citations |last1=Bohm |year=2001 |nb=yes |loc=sections II.4 and II.8}}Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of experimental particle physics: Collision reactions such as occur in particle accelerators, where non-interacting particles head towards each other and collide in a small interaction zone, with a new set of non-interacting particles as the result, can be described as the scalar product of outgoing particle states and a linear combination of ingoing particle states. The linear combination is given by a matrix known as the S-matrix, which encodes all information about the possible interactions between particles.{{Harvard citations |last1=Weinberg |year=1995 |nb=yes |loc=Ch. 3}}

Normal modes

A general application of matrices in physics is the description of linearly coupled harmonic systems. The equations of motion of such systems can be described in matrix form, with a mass matrix multiplying a generalized velocity to give the kinetic term, and a force matrix multiplying a displacement vector to characterize the interactions. The best way to obtain solutions is to determine the system's eigenvectors, its normal modes, by diagonalizing the matrix equation. Techniques like this are crucial when it comes to the internal dynamics of molecules: the internal vibrations of systems consisting of mutually bound component atoms.{{Harvard citations |last1=Wherrett |year=1987 |nb=yes |loc=part II}} They are also needed for describing mechanical vibrations, and oscillations in electrical circuits.{{Harvard citations |last1=Riley |last2=Hobson |last3=Bence |year=1997 |nb=yes |loc=7.17}}
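As a sketch of this procedure (the two-mass, three-spring chain below is an illustrative assumption, not an example from the text): with unit masses the mass matrix is the identity, and diagonalizing the stiffness matrix yields the normal-mode frequencies and shapes:

<syntaxhighlight lang="python">
import numpy as np

# Two unit masses in a line, tied to the walls and to each other by unit
# springs; M is the mass matrix and K the force (stiffness) matrix.
M = np.eye(2)
K = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])

# Normal modes solve K v = omega^2 M v; with M = I this reduces to an
# ordinary symmetric eigenvalue problem.
omega_sq, modes = np.linalg.eigh(K)
print(np.sqrt(omega_sq))  # angular frequencies: [1.0, 1.732...]
print(modes)              # columns: in-phase and out-of-phase mode shapes
</syntaxhighlight>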

Geometrical optics

Geometrical optics provides further matrix applications. In this approximative theory, the wave nature of light is neglected. The result is a model in which light rays are indeed geometrical rays. If the deflection of light rays by optical elements is small, the action of a lens or reflective element on a given light ray can be expressed as multiplication of a two-component vector with a two-by-two matrix called a ray transfer matrix (see ray transfer matrix analysis): the vector's components are the light ray's slope and its distance from the optical axis, while the matrix encodes the properties of the optical element. Actually, there are two kinds of matrices, viz. a refraction matrix describing the refraction at a lens surface, and a translation matrix, describing the translation of the plane of reference to the next refracting surface, where another refraction matrix applies. The optical system, consisting of a combination of lenses and/or reflective elements, is simply described by the matrix resulting from the product of the components' matrices.{{Harvard citations |last1=Guenther |year=1990 |nb=yes |loc=Ch. 5}}
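A brief sketch under the standard thin-element conventions (the focal length, distances, and the (height, slope) ordering of the ray vector are illustrative assumptions):

<syntaxhighlight lang="python">
import numpy as np

def translation(d):
    """Free propagation over distance d: height changes by d * slope."""
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def thin_lens(f):
    """Thin lens of focal length f: slope changes by -height / f."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# Ray state (distance from the optical axis, slope).
ray = np.array([2.0, 0.0])   # ray parallel to the axis, height 2

# Lens of focal length 50 followed by 50 units of travel; matrices of
# later elements multiply on the left.
system = translation(50.0) @ thin_lens(50.0)
print(system @ ray)          # [0. -0.04]: the ray crosses the axis at the focus
</syntaxhighlight>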

Electronics

Traditional mesh analysis and nodal analysis in electronics lead to a system of linear equations that can be described with a matrix. The behaviour of many electronic components can be described using matrices. Let A be a 2-dimensional vector with the component's input voltage v{{sub|1}} and input current i{{sub|1}} as its elements, and let B be a 2-dimensional vector with the component's output voltage v{{sub|2}} and output current i{{sub|2}} as its elements. Then the behaviour of the electronic component can be described by B = H · A, where H is a 2 × 2 matrix containing one impedance element (h{{sub|12}}), one admittance element (h{{sub|21}}), and two dimensionless elements (h{{sub|11}} and h{{sub|22}}). Calculating a circuit now reduces to multiplying matrices.
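A minimal sketch of the B = H · A description (the numeric h-parameter values are made up for illustration; only the matrix-vector product is the point):

<syntaxhighlight lang="python">
import numpy as np

# Following the convention above, B = H @ A maps the input pair (v1, i1)
# to the output pair (v2, i2).  Values are illustrative only.
H = np.array([[0.99,  -5.0 ],    # h11 dimensionless, h12 an impedance (ohms)
              [1e-4,   0.98]])   # h21 an admittance (siemens), h22 dimensionless

A = np.array([5.0, 0.01])        # v1 = 5 V, i1 = 10 mA
v2, i2 = H @ A
print(v2, i2)                    # output voltage (V) and current (A)
</syntaxhighlight>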

History

Matrices have a long history of application in solving linear equations, but they were known as arrays until the 1800s. The Chinese text The Nine Chapters on the Mathematical Art, written in the 10th–2nd century BCE, is the first example of the use of array methods to solve simultaneous equations,{{Harvard citations |last1=Shen |last2=Crossley |last3=Lun |year=1999 |nb=yes}} cited by {{Harvard citations |last1=Bretscher |year=2005 |nb=yes |loc=p. 1}} including the concept of determinants. In 1545 the Italian mathematician Gerolamo Cardano introduced the method to Europe when he published Ars Magna (Dossey, Otto, Spense, Vanden Eynden, Discrete Mathematics, 4th ed., Addison-Wesley, October 10, 2001, {{ISBN|978-0-321-07912-1}}, pp. 564–565). The Japanese mathematician Seki used the same array methods to solve simultaneous equations in 1683.{{Citation |last1=Needham |first1=Joseph |author1-link=Joseph Needham |last2=Wang |first2=Ling |author2-link=Wang Ling (historian) |title=Science and Civilisation in China |volume=III |year=1959 |publisher=Cambridge University Press |location=Cambridge |isbn=978-0-521-05801-8 |page=117}} The Dutch mathematician Jan de Witt represented transformations using arrays in his 1659 book Elements of Curves (Dossey et al., Discrete Mathematics, p. 564). Between 1700 and 1710 Gottfried Wilhelm Leibniz publicized the use of arrays for recording information or solutions and experimented with over 50 different systems of arrays. Cramer presented his rule in 1750.

The term "matrix" (Latin for "womb", "dam" (non-human female animal kept for breeding), "source", "origin", "list", "register", derived from the Latin mater, mother{{Citation |url=https://merriam-webster.com/dictionary/matrix |title=Merriam-Webster dictionary |access-date=April 20, 2009 |publisher=Merriam-Webster}}) was coined by James Joseph Sylvester in 1850. (Although many sources state that J. J. Sylvester coined the mathematical term "matrix" in 1848, Sylvester published nothing in 1848; for proof, see J. J. Sylvester with H. F. Baker, ed., The Collected Mathematical Papers of James Joseph Sylvester, Cambridge, England: Cambridge University Press, 1904, vol. 1. His earliest use of the term occurs in 1850 in J. J. Sylvester, "Additions to the articles in the September number of this journal, 'On a new class of theorems,' and on Pascal's theorem," The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 37: 363–370. From page 369: "For this purpose, we must commence, not with a square, but with an oblong arrangement of terms consisting, suppose, of m lines and n columns. This does not in itself represent a determinant, but is, as it were, a Matrix out of which we may form various systems of determinants ...") Sylvester understood a matrix as an object giving rise to several determinants today called minors, that is to say, determinants of smaller matrices that derive from the original one by removing columns and rows. In an 1851 paper, Sylvester explains (The Collected Mathematical Papers of James Joseph Sylvester: 1837–1853, Paper 37, p. 247):

{{Blockquote|I have in previous papers defined a "Matrix" as a rectangular array of terms, out of which different systems of determinants may be engendered as from the womb of a common parent.}}

Arthur Cayley published a treatise on geometric transformations using matrices that were not rotated versions of the coefficients being investigated, as had previously been done. Instead, he defined operations such as addition, subtraction, multiplication, and division as transformations of those matrices and showed the associative and distributive properties held true. Cayley investigated and demonstrated the non-commutative property of matrix multiplication as well as the commutative property of matrix addition. Early matrix theory had limited the use of arrays almost exclusively to determinants, and Arthur Cayley's abstract matrix operations were revolutionary. He was instrumental in proposing a matrix concept independent of equation systems. In 1858 Cayley published his A memoir on the theory of matrices (Phil. Trans. 148 (1858), pp. 17–37; Math. Papers II, pp. 475–496){{Harvard citations |editor1-last=Dieudonné |year=1978 |loc=Vol. 1, Ch. III, p. 96 |nb=yes}} in which he proposed and demonstrated the Cayley–Hamilton theorem.

The English mathematician Cuthbert Edmund Cullis was the first to use modern bracket notation for matrices, in 1913, and he simultaneously demonstrated the first significant use of the notation A = [a{{sub|i,j}}] to represent a matrix, where a{{sub|i,j}} refers to the ith row and the jth column.

The modern study of determinants sprang from several sources.{{Harvard citations |last1=Knobloch |year=1994 |nb=yes}} Number-theoretical problems led Gauss to relate coefficients of quadratic forms, that is, expressions such as {{nowrap|x{{sup|2}} + xy − 2y{{sup|2}}}}, and linear maps in three dimensions to matrices. Eisenstein further developed these notions, including the remark that, in modern parlance, matrix products are non-commutative. Cauchy was the first to prove general statements about determinants, using as definition of the determinant of a matrix A = [a{{sub|i,j}}] the following: replace the powers a{{sub|j}}{{sup|k}} by a{{sub|jk}} in the polynomial
a_1 a_2 \cdots a_n \prod_{i < j} (a_j - a_i)\,,
where \textstyle\prod denotes the product of the indicated terms. He also showed, in 1829, that the eigenvalues of symmetric matrices are real.{{Harvard citations |last1=Hawkins |year=1975 |nb=yes}} Jacobi studied "functional determinants"—later called Jacobi determinants by Sylvester—which can be used to describe geometric transformations at a local (or infinitesimal) level, see above. Kronecker's Vorlesungen über die Theorie der Determinanten{{Harvard citations |last1=Kronecker |editor1-last=Hensel |year=1897 |nb=yes}} and Weierstrass' Zur Determinantentheorie,{{Harvard citations |last1=Weierstrass |year=1915 |volume=3 |loc=pp. 271–286 |nb=yes}} both published in 1903, first treated determinants axiomatically, as opposed to previous more concrete approaches such as the mentioned formula of Cauchy. At that point, determinants were firmly established.

Many theorems were first established for small matrices only; for example, the Cayley–Hamilton theorem was proved for 2×2 matrices by Cayley in the aforementioned memoir, and by Hamilton for 4×4 matrices. Frobenius, working on bilinear forms, generalized the theorem to all dimensions (1898). Also at the end of the 19th century, the Gauss–Jordan elimination (generalizing a special case now known as Gauss elimination) was established by Wilhelm Jordan. In the early 20th century, matrices attained a central role in linear algebra,{{Harvard citations |last1=Bôcher |year=2004 |nb=yes}} partially due to their use in classification of the hypercomplex number systems of the previous century.

The inception of matrix mechanics by Heisenberg, Born and Jordan led to studying matrices with infinitely many rows and columns.{{Harvard citations |last1=Mehra |last2=Rechenberg |year=1987 |nb=yes}} Later, von Neumann carried out the mathematical formulation of quantum mechanics, by further developing functional analytic notions such as linear operators on Hilbert spaces, which, very roughly speaking, correspond to Euclidean space, but with an infinity of independent directions.

Other historical usages of the word "matrix" in mathematics

The word has been used in unusual ways by at least two authors of historical importance.

Bertrand Russell and Alfred North Whitehead in their Principia Mathematica (1910–1913) use the word "matrix" in the context of their axiom of reducibility. They proposed this axiom as a means to reduce any function to one of lower type, successively, so that at the "bottom" (0 order) the function is identical to its extension (Whitehead, Alfred North, and Russell, Bertrand (1913), Principia Mathematica to *56, Cambridge at the University Press, Cambridge UK, republished 1962, cf. page 162ff.):

{{Blockquote|Let us give the name of matrix to any function, of however many variables, that does not involve any apparent variables. Then, any possible function other than a matrix derives from a matrix by means of generalization, that is, by considering the proposition that the function in question is true with all possible values or with some value of one of the arguments, the other argument or arguments remaining undetermined.}}

For example, a function Φ(x, y) of two variables x and y can be reduced to a collection of functions of a single variable, for example, y, by "considering" the function for all possible values of "individuals" a{{sub|i}} substituted in place of variable x. And then the resulting collection of functions of the single variable y, that is, {{math|∀a{{sub|i}}: Φ(a{{sub|i}}, y)}}, can be reduced to a "matrix" of values by "considering" the function for all possible values of "individuals" b{{sub|i}} substituted in place of variable y:
{{math|∀b{{sub|j}}∀a{{sub|i}}: Φ(a{{sub|i}}, b{{sub|j}}).}}
Alfred Tarski in his 1946 Introduction to Logic used the word "matrix" synonymously with the notion of truth table as used in mathematical logic (Tarski, Alfred (1946), Introduction to Logic and the Methodology of Deductive Sciences, Dover Publications, Inc., New York, NY, {{ISBN|0-486-28462-X}}).

See also

{{Div col|colwidth=30em}}
  • List of named matrices
  • {{annotated link|Algebraic multiplicity}}
  • {{annotated link|Geometric multiplicity}}
  • {{annotated link|Gram–Schmidt process}}
  • Irregular matrix
  • {{annotated link|Matrix calculus}}
  • {{annotated link|Matrix function}}
  • Matrix multiplication algorithm
  • Tensor — A generalization of matrices with any number of indices
  • {{Annotated link|Bohemian matrices}}{{div col end}}

Notes

{{Reflist|colwidth=30em}}{{Reflist|group=nb|3}}

References

  • {{Citation |first1=Howard |last1=Anton |year=1987 |isbn=0-471-84819-0 |title=Elementary Linear Algebra |edition=5th |publisher=Wiley |location=New York}}
  • {{Citation |last1=Arnold |first1=Vladimir I. |author1-link=Vladimir Arnold |last2=Cooke |first2=Roger |author2-link=Roger Cooke (mathematician) |title=Ordinary differential equations |publisher=Springer-Verlag |location=Berlin, DE; New York, NY |isbn=978-3-540-54813-3 |year=1992}}
  • {{Citation |last1=Artin |first1=Michael |author1-link=Michael Artin |title=Algebra |publisher=Prentice Hall |isbn=978-0-89871-510-1 |year=1991}}
  • {{Citation |last1=Association for Computing Machinery |title=Computer Graphics |publisher=Tata McGraw–Hill |isbn=978-0-07-059376-3 |year=1979}}
  • {{Citation |last1=Baker |first1=Andrew J. |title=Matrix Groups: An Introduction to Lie Group Theory |publisher=Springer-Verlag |location=Berlin, DE; New York, NY |isbn=978-1-85233-470-3 |year=2003 |url-access=registration |url=https://archive.org/details/matrixgroupsintr0000bake }}
  • {{Citation |last1=Bau III |first1=David |last2=Trefethen |first2=Lloyd N. |author2-link=Lloyd N. Trefethen |title=Numerical linear algebra |publisher=Society for Industrial and Applied Mathematics |location=Philadelphia, PA |isbn=978-0-89871-361-9 |year=1997}}
  • {{Citation |first1=Raymond A. |last1=Beauregard |first2=John B. |last2=Fraleigh |year=1973 |isbn=0-395-14017-X |title=A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields |publisher=Houghton Mifflin Co. |location=Boston |url-access=registration |url=https://archive.org/details/firstcourseinlin0000beau }}
  • {{Citation |last1=Bretscher |first1=Otto |title=Linear Algebra with Applications |publisher=Prentice Hall |edition=3rd |year=2005}}
  • {{Citation |first1=Richard |last1=Bronson |year=1970 |lccn=70097490 |title=Matrix Methods: An Introduction |publisher=Academic Press |location=New York}}
  • {{Citation |last1=Bronson |first1=Richard |title=Schaum's outline of theory and problems of matrix operations |publisher=McGraw–Hill |location=New York |isbn=978-0-07-007978-6 |year=1989}}
  • {{Citation |last1=Brown |first1=William C. |title=Matrices and vector spaces |publisher=Marcel Dekker |location=New York, NY |isbn=978-0-8247-8419-5 |year=1991 |url-access=registration |url=https://archive.org/details/matricesvectorsp0000brow }}
  • {{Citation |last1=Coburn |first1=Nathaniel |title=Vector and tensor analysis |publisher=Macmillan |location=New York, NY |oclc=1029828 |year=1955}}
  • {{Citation |last1=Conrey |first1=J. Brian |title=Ranks of elliptic curves and random matrix theory |publisher=Cambridge University Press |isbn=978-0-521-69964-8 |year=2007}}
  • {{Citation |first1=John B. |last1=Fraleigh |year=1976 |isbn=0-201-01984-1 |title=A First Course In Abstract Algebra |edition=2nd |publisher=Addison-Wesley |location=Reading}}
  • {{Citation |last1=Fudenberg |first1=Drew |last2=Tirole |first2=Jean |author2-link=Jean Tirole |title=Game Theory |publisher=MIT Press |year=1983}}
  • {{Citation |last1=Gilbarg |first1=David |last2=Trudinger |first2=Neil S. |author2-link=Neil Trudinger |title=Elliptic partial differential equations of second order |publisher=Springer-Verlag |location=Berlin, DE; New York, NY |edition=2nd |isbn=978-3-540-41160-4 |year=2001}}
  • {{Citation |first1=Chris |last1=Godsil |author-link1=Chris Godsil |first2=Gordon |last2=Royle |author-link2=Gordon Royle |title=Algebraic Graph Theory |publisher=Springer-Verlag |location=Berlin, DE; New York, NY |series=Graduate Texts in Mathematics |isbn=978-0-387-95220-8 |year=2004 |volume=207}}
  • {{Citation |last1=Golub |first1=Gene H. |author1-link=Gene H. Golub |last2=Van Loan |first2=Charles F. |author2-link=Charles F. Van Loan |title=Matrix Computations |publisher=Johns Hopkins |edition=3rd |isbn=978-0-8018-5414-9 |year=1996}}
  • {{Citation |last1=Greub |first1=Werner Hildbert |title=Linear algebra |publisher=Springer-Verlag |location=Berlin, DE; New York, NY |series=Graduate Texts in Mathematics |isbn=978-0-387-90110-7 |year=1975}}
  • {{Citation |last1=Halmos |first1=Paul Richard |author1-link=Paul Halmos |title=A Hilbert space problem book |publisher=Springer-Verlag |location=Berlin, DE; New York, NY |edition=2nd |series=Graduate Texts in Mathematics |isbn=978-0-387-90685-0 |mr=675952 |year=1982 |volume=19}}
  • {{Citation |last1=Horn |first1=Roger A. |author1-link=Roger Horn |last2=Johnson |first2=Charles R. |author2-link=Charles Royal Johnson |title=Matrix Analysis |publisher=Cambridge University Press |isbn=978-0-521-38632-6 |year=1985}}
  • {{Citation |last1=Householder |first1=Alston S. |title=The theory of matrices in numerical analysis |publisher=Dover Publications |location=New York, NY |mr=0378371 |year=1975}}
  • {{Citation |first1=Erwin |last1=Kreyszig |year=1972 |isbn=0-471-50728-8 |title=Advanced Engineering Mathematics |edition=3rd |publisher=Wiley |location=New York |url=https://archive.org/details/advancedengineer00krey}}.
  • {{Citation |last1=Krzanowski |first1=Wojtek J. |title=Principles of multivariate analysis |publisher=The Clarendon Press Oxford University Press |series=Oxford Statistical Science Series |isbn=978-0-19-852211-9 |mr=969370 |year=1988 |volume=3}}
  • {{Citation |editor1-last=Itô |editor1-first=Kiyosi |title=Encyclopedic dictionary of mathematics. Vol. I-IV |publisher=MIT Press |edition=2nd |isbn=978-0-262-09026-1 |mr=901762 |year=1987}}
  • {{Citation |last1=Lang |first1=Serge |author1-link=Serge Lang |title=Analysis II |publisher=Addison-Wesley |year=1969}}
  • {{Citation |last1=Lang |first1=Serge |title=Calculus of several variables |publisher=Springer-Verlag |location=Berlin, DE; New York, NY |edition=3rd |isbn=978-0-387-96405-8 |year=1987a |url=https://archive.org/details/calculusofsevera0000lang}}
  • {{Citation |last1=Lang |first1=Serge |title=Linear algebra |publisher=Springer-Verlag |location=Berlin, DE; New York, NY |isbn=978-0-387-96412-6 |year=1987b}}
  • {{Citation |last1=Latouche |first1=Guy |last2=Ramaswami |first2=Vaidyanathan |title=Introduction to matrix analytic methods in stochastic modeling |publisher=Society for Industrial and Applied Mathematics |location=Philadelphia, PA |edition=1st |isbn=978-0-89871-425-8 |year=1999}}
  • {{Citation |last1=Manning |first1=Christopher D. |last2=Schütze |first2=Hinrich |title=Foundations of statistical natural language processing |publisher=MIT Press |isbn=978-0-262-13360-9 |year=1999}}
  • {{Citation |last1=Mehata |first1=K. M. |last2=Srinivasan |first2=S. K. |title=Stochastic processes |publisher=McGraw–Hill |location=New York, NY |isbn=978-0-07-096612-3 |year=1978}}
  • {{Citation |last1=Mirsky |first1=Leonid |author-link=Leon Mirsky |title=An Introduction to Linear Algebra |url=https://books.google.com/books?id=ULMmheb26ZcC&q=linear+algebra+determinant&pg=PA1 |publisher=Courier Dover Publications |isbn=978-0-486-66434-7 |year=1990}}
  • {{Citation |first1=Evar D. |last1=Nering |year=1970 |title=Linear Algebra and Matrix Theory |edition=2nd |publisher=Wiley |location=New York |lccn=76-91646}}
  • {{Citation |last1=Nocedal |first1=Jorge |last2=Wright |first2=Stephen J. |title=Numerical Optimization |publisher=Springer-Verlag |location=Berlin, DE; New York, NY |edition=2nd |isbn=978-0-387-30303-1 |year=2006 |page=449}}
  • {{Citation |last1=Oualline |first1=Steve |title=Practical C++ programming |publisher=O'Reilly |isbn=978-0-596-00419-4 |year=2003}}
  • {{Citation |last1=Press |first1=William H. |last2=Flannery |first2=Brian P. |last3=Teukolsky |first3=Saul A. |author3-link=Saul Teukolsky |last4=Vetterling |first4=William T. |title=Numerical Recipes in FORTRAN: The Art of Scientific Computing |chapter-url=https://mpi-hd.mpg.de/astrophysik/HEA/internal/Numerical_Recipes/f2-3.pdf |publisher=Cambridge University Press |edition=2nd |year=1992 |chapter=LU Decomposition and Its Applications |pages=34–42 |url-status=unfit |archive-url=https://web.archive.org/web/20090906113144weblink |archive-date=2009-09-06}}
  • {{Citation |last1=Punnen |first1=Abraham P. |last2=Gutin |first2=Gregory |title=The traveling salesman problem and its variations |publisher=Kluwer Academic Publishers |location=Boston, MA |isbn=978-1-4020-0664-7 |year=2002}}
  • {{Citation |last1=Reichl |first1=Linda E.|author-link=Linda Reichl |title=The transition to chaos: conservative classical systems and quantum manifestations |publisher=Springer-Verlag |location=Berlin, DE; New York, NY |isbn=978-0-387-98788-0 |year=2004}}
  • {{Citation |last1=Rowen |first1=Louis Halle |title=Graduate Algebra: noncommutative view |publisher=American Mathematical Society |location=Providence, RI |isbn=978-0-8218-4153-2 |year=2008}}
  • {{Citation |last1=Šolín |first1=Pavel |title=Partial Differential Equations and the Finite Element Method |publisher=Wiley-Interscience |isbn=978-0-471-76409-0 |year=2005}}
  • {{Citation |last1=Stinson |first1=Douglas R. |title=Cryptography |publisher=Chapman & Hall/CRC |series=Discrete Mathematics and its Applications |isbn=978-1-58488-508-5 |year=2005}}
  • {{Citation |last1=Stoer |first1=Josef |last2=Bulirsch |first2=Roland |title=Introduction to Numerical Analysis |publisher=Springer-Verlag |location=Berlin, DE; New York, NY |edition=3rd |isbn=978-0-387-95452-3 |year=2002}}
  • {{Citation |last1=Ward |first1=J. P. |title=Quaternions and Cayley numbers |publisher=Kluwer Academic Publishers Group |location=Dordrecht, NL |series=Mathematics and its Applications |isbn=978-0-7923-4513-8 |mr=1458894 |year=1997 |volume=403 |doi=10.1007/978-94-011-5768-1 |url-access=registration |url=https://archive.org/details/quaternionscayle0000ward }}
  • {{Citation |last1=Wolfram |first1=Stephen |author1-link=Stephen Wolfram |title=The Mathematica Book |publisher=Wolfram Media |location=Champaign, IL |edition=5th |isbn=978-1-57955-022-6 |year=2003}}

Physics references

  • {{Citation |last=Bohm |first=Arno |title=Quantum Mechanics: Foundations and Applications |publisher=Springer |year=2001 |isbn=0-387-95330-2}}
  • {{Citation |last1=Burgess |first1=Cliff |last2=Moore |first2=Guy |title=The Standard Model. A Primer |publisher=Cambridge University Press |year=2007 |isbn=978-0-521-86036-9}}
  • {{Citation |last=Guenther |first=Robert D. |title=Modern Optics |publisher=John Wiley |year=1990 |isbn=0-471-60538-7}}
  • {{Citation |last1=Itzykson |first1=Claude |last2=Zuber |first2=Jean-Bernard |title=Quantum Field Theory |publisher=McGraw–Hill |year=1980 |isbn=0-07-032071-3 |url-access=registration |url=https://archive.org/details/quantumfieldtheo0000itzy }}
  • {{Citation |last1=Riley |first1=Kenneth F. |last2=Hobson |first2=Michael P. |last3=Bence |first3=Stephen J. |title=Mathematical methods for physics and engineering |publisher=Cambridge University Press |year=1997 |isbn=0-521-55506-X}}
  • {{Citation |last=Schiff |first=Leonard I. |title=Quantum Mechanics |edition=3rd |publisher=McGraw–Hill |year=1968}}
  • {{Citation |last=Weinberg |first=Steven |title=The Quantum Theory of Fields. Volume I: Foundations |publisher=Cambridge University Press |year=1995 |isbn=0-521-55001-7 |url=https://archive.org/details/quantumtheoryoff00stev}}
  • {{Citation |last=Wherrett |first=Brian S. |year=1987 |title=Group Theory for Atoms, Molecules and Solids |publisher=Prentice–Hall International |isbn=0-13-365461-3}}
  • {{Citation |last1=Zabrodin |first1=Anton |last2=Brezin |first2=Édouard |last3=Kazakov |first3=Vladimir |last4=Serban |first4=Didina |last5=Wiegmann |first5=Paul |title=Applications of Random Matrices in Physics (NATO Science Series II: Mathematics, Physics and Chemistry) |publisher=Springer-Verlag |location=Berlin, DE; New York, NY |isbn=978-1-4020-4530-1 |year=2006}}

Historical references

  • A. Cayley, A memoir on the theory of matrices, Phil. Trans. 148 (1858), pp. 17–37; Math. Papers II, pp. 475–496
  • {{Citation |last1=Bôcher |first1=Maxime |author1-link=Maxime Bôcher |title=Introduction to higher algebra |publisher=Dover Publications |location=New York, NY |isbn=978-0-486-49570-5 |year=2004}}, reprint of the 1907 original edition
  • {{Citation |last1=Cayley |first1=Arthur |author1-link=Arthur Cayley |title=The collected mathematical papers of Arthur Cayley |url=https://quod.lib.umich.edu/cgi/t/text/pageviewer-idx?c=umhistmath;cc=umhistmath;rgn=full%20text;idno=ABS3153.0001.001;didno=ABS3153.0001.001;view=image;seq=00000140 |publisher=Cambridge University Press |year=1889 |volume=I (1841–1853) |pages=123–126}}
  • {{Citation |editor1-last=Dieudonné |editor1-first=Jean |editor1-link=Jean Dieudonné |title=Abrégé d'histoire des mathématiques 1700-1900 |publisher=Hermann |location=Paris, FR |year=1978}}
  • {{Citation |last1=Hawkins |first1=Thomas |title=Cauchy and the spectral theory of matrices |mr=0469635 |year=1975 |journal=Historia Mathematica |issn=0315-0860 |volume=2 |pages=1–29 |doi=10.1016/0315-0860(75)90032-4|doi-access= }}
  • {{Citation |last1=Knobloch |first1=Eberhard | author-link = Eberhard Knobloch |title=The intersection of history and mathematics |publisher=Birkhäuser |location=Basel, Boston, Berlin |series=Science Networks Historical Studies |mr=1308079 |year=1994 |volume=15 |chapter=From Gauss to Weierstrass: determinant theory and its historical evaluations |pages=51–66}}
  • {{Citation |last1=Kronecker |first1=Leopold |author1-link=Leopold Kronecker |editor1-last=Hensel |editor1-first=Kurt |editor1-link=Kurt Hensel |title=Leopold Kronecker's Werke |url=https://quod.lib.umich.edu/cgi/t/text/text-idx?c=umhistmath;idno=AAS8260.0002.001 |publisher=Teubner |year=1897}}
  • {{Citation |last1=Mehra |first1=Jagdish |last2=Rechenberg |first2=Helmut |author-link=Jagdish Mehra|author-link2=Helmut Rechenberg|title=The Historical Development of Quantum Theory |publisher=Springer-Verlag |location=Berlin, DE; New York, NY |edition=1st |isbn=978-0-387-96284-9 |year=1987}}
  • {{Citation |last1=Shen |first1=Kangshen |last2=Crossley |first2=John N. |last3=Lun |first3=Anthony Wah-Cheung |title=Nine Chapters of the Mathematical Art, Companion and Commentary |publisher=Oxford University Press |edition=2nd |isbn=978-0-19-853936-0 |year=1999}}
  • {{Citation |last1=Weierstrass |first1=Karl |author1-link=Karl Weierstrass |title=Collected works |url=https://quod.lib.umich.edu/cgi/t/text/text-idx?c=umhistmath;idno=AAN8481.0003.001 |year=1915 |volume=3}}

Further reading

  • {{SpringerEOM|title=Matrix|id=p/m062780}}
  • {{Citation |last1=Kaw |first1=Autar K. |title=Introduction to Matrix Algebra |date=September 2008 |publisher=Lulu.com |url=https://autarkaw.com/books/matrixalgebra/index.html |isbn=978-0-615-25126-4}}
  • {{Citation |title=The Matrix Cookbook |url=https://math.uwaterloo.ca/~hwolkowi//matrixcookbook.pdf |access-date=24 March 2014 }}
  • {{Citation |last1=Brookes |first1=Mike |title=The Matrix Reference Manual |url=https://ee.ic.ac.uk/hp/staff/dmb/matrix/intro.html |publisher=Imperial College |location=London |year=2005 |access-date=10 Dec 2008}}

External links

{{sister project links|d=y|c=Category:matrix|b=Linear Algebra|v=Linear algebra#Matrices|s=no|m=no|mw=no|wikt=matrix|voy=no|species=no|q=no|n=no}} {{Good article}}{{Linear algebra}}{{Tensors}}{{authority control}}
