Gram matrix
{{short description|Matrix of inner products of a set of vectors}}In linear algebra, the Gram matrix (or Gramian matrix, Gramian) of a set of vectors v_1, \dots, v_n in an inner product space is the Hermitian matrix of inner products, whose entries are given by the inner product G_{ij} = \langle v_i, v_j \rangle.{{harvnb|Horn|Johnson|2013|p=441}}, Theorem 7.2.10 If the vectors v_1, \dots, v_n are the columns of a matrix X, then the Gram matrix is X^\dagger X in the general case that the vector coordinates are complex numbers, which simplifies to X^\top X when the coordinates are real.

An important application is to test linear independence: a set of vectors is linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero.

The Gram matrix is named after Jørgen Pedersen Gram.
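
For illustration, a minimal numerical sketch of this definition and the independence test (assuming Python with numpy; the example vectors are arbitrary, with the third column chosen as a linear combination of the first two):
<syntaxhighlight lang="python">
import numpy as np

# Columns of X are the vectors v_1, v_2, v_3; the third is 2*(v_1 + v_2),
# so the set is linearly dependent.
X = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 2.0],
              [1.0, 1.0, 4.0]])

G = X.conj().T @ X       # Gram matrix X^dagger X (equals X^T X for real data)
print(np.linalg.det(G))  # ~0: the Gram determinant vanishes, confirming dependence
</syntaxhighlight>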

Examples

For finite-dimensional real vectors in \mathbb{R}^n with the usual Euclidean dot product, the Gram matrix is G = V^\top V, where V is a matrix whose columns are the vectors v_k and V^\top is its transpose whose rows are the vectors v_k^\top. For complex vectors in \mathbb{C}^n, G = V^\dagger V, where V^\dagger is the conjugate transpose of V.

Given square-integrable functions \{\ell_i(\cdot),\ i = 1, \dots, n\} on the interval [t_0, t_f], the Gram matrix G = [G_{ij}] is:
G_{ij} = \int_{t_0}^{t_f} \ell_i^*(\tau)\,\ell_j(\tau)\, d\tau,
where \ell_i^*(\tau) is the complex conjugate of \ell_i(\tau).

For any bilinear form B on a finite-dimensional vector space over any field we can define a Gram matrix G attached to a set of vectors v_1, \dots, v_n by G_{ij} = B(v_i, v_j). The matrix will be symmetric if the bilinear form B is symmetric.
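
A sketch of the function case (assuming scipy for the quadrature; the real-valued monomial basis on [0, 1] is an illustrative choice, not from the source):
<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

# Example basis functions ell_i on [t0, tf]; real-valued, so conjugation is a no-op.
funcs = [lambda t: 1.0, lambda t: t, lambda t: t**2]
t0, tf = 0.0, 1.0

n = len(funcs)
G = np.empty((n, n))
for i in range(n):
    for j in range(n):
        G[i, j] = quad(lambda t: funcs[i](t) * funcs[j](t), t0, tf)[0]

print(G)  # the 3x3 Hilbert matrix: G[i, j] = 1 / (i + j + 1)
</syntaxhighlight>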

Applications

  • In Riemannian geometry, given an embedded k-dimensional Riemannian manifold M \subset \mathbb{R}^n and a parametrization \phi: U \to M for {{nowrap|(x_1, \ldots, x_k) \in U \subset \mathbb{R}^k,}} the volume form \omega on M induced by the embedding may be computed using the Gramian of the coordinate tangent vectors: \omega = \sqrt{\det G}\, dx_1 \cdots dx_k, \quad G = \left[\left\langle \frac{\partial\phi}{\partial x_i}, \frac{\partial\phi}{\partial x_j} \right\rangle\right]. This generalizes the classical surface integral of a parametrized surface \phi: U \to S \subset \mathbb{R}^3 for (x, y) \in U \subset \mathbb{R}^2: \int_S f\, dA = \iint_U f(\phi(x, y))\, \left|\frac{\partial\phi}{\partial x} \times \frac{\partial\phi}{\partial y}\right|\, dx\, dy.
  • If the vectors are centered random variables, the Gramian is approximately proportional to the covariance matrix, with the scaling given by the number of observations in each vector (the sample covariance being the Gramian divided by that number).
  • In quantum chemistry, the Gram matrix of a set of basis vectors is the overlap matrix.
  • In control theory (or more generally systems theory), the controllability Gramian and observability Gramian determine properties of a linear system.
  • Gramian matrices arise in covariance structure model fitting (see e.g., Jamshidian and Bentler, 1993, Applied Psychological Measurement, Volume 18, pp. 79–94).
  • In the finite element method, the Gram matrix arises from approximating a function from a finite dimensional space; the Gram matrix entries are then the inner products of the basis functions of the finite dimensional subspace.
  • In machine learning, kernel functions are often represented as Gram matrices.{{cite journal|last1=Lanckriet|first1=G. R. G.|last2=Cristianini|first2=N.|last3=Bartlett|first3=P.|last4=El Ghaoui|first4=L.|last5=Jordan|first5=M. I.|year=2004|title=Learning the kernel matrix with semidefinite programming|journal=Journal of Machine Learning Research|volume=5|pages=27–72 [p. 29]|url=dl.acm.org/citation.cfm?id=894170}} (Also see kernel PCA; a minimal sketch follows this list.)
  • Since a Gram matrix over the reals is a symmetric matrix, it is diagonalizable and its eigenvalues are non-negative. The eigendecomposition of the Gram matrix G = V^\top V is closely related to the singular value decomposition of V: the eigenvalues of G are the squares of the singular values of V.
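
As a sketch of the machine-learning item above, the following builds the Gram (kernel) matrix of a small dataset under a Gaussian RBF kernel; the sample points and the bandwidth gamma are arbitrary assumptions:
<syntaxhighlight lang="python">
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2) for rows x_i of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T  # pairwise squared distances
    return np.exp(-gamma * d2)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])  # three sample points
K = rbf_gram(X, gamma=0.5)
# K is symmetric positive semidefinite, as any Gram matrix must be.
print(np.linalg.eigvalsh(K))  # all eigenvalues >= 0 (up to rounding)
</syntaxhighlight>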

Properties

Positive-semidefiniteness

The Gram matrix is symmetric in the case the inner product is real-valued; it is Hermitian in the general, complex case by definition of an inner product.

The Gram matrix is positive semidefinite, and every positive semidefinite matrix is the Gramian matrix for some set of vectors. The fact that the Gramian matrix is positive-semidefinite can be seen from the following simple derivation:
x^\dagger \mathbf{G} x
= \sum_{i,j} x_i^* x_j \langle v_i, v_j \rangle
= \sum_{i,j} \langle x_i v_i, x_j v_j \rangle
= \Bigl\langle \sum_i x_i v_i, \sum_j x_j v_j \Bigr\rangle
= \Bigl\| \sum_i x_i v_i \Bigr\|^2 \geq 0.
The first equality follows from the definition of matrix multiplication, the second and third from the sesquilinearity of the inner product (bilinearity in the real case), and the last from the positive definiteness of the inner product.

Note that this also shows that the Gramian matrix is positive definite if and only if the vectors v_i are linearly independent (that is, \sum_i x_i v_i \neq 0 for all nonzero x).
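
Both statements can be checked numerically; a sketch with randomly generated vectors:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((5, 3))   # columns: three generic vectors in R^5
G = V.T @ V
print(np.linalg.eigvalsh(G))      # all positive: G is positive definite

# Force a dependency: the third vector is the sum of the first two.
V_dep = np.column_stack([V[:, 0], V[:, 1], V[:, 0] + V[:, 1]])
G_dep = V_dep.T @ V_dep
print(np.linalg.eigvalsh(G_dep))  # smallest eigenvalue ~0: only semidefinite
</syntaxhighlight>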

Finding a vector realization

{{See also|Positive definite matrix#Decomposition}}Given any positive semidefinite matrix M, one can decompose it as:
M = B^\dagger B,
where B^\dagger is the conjugate transpose of B (or M = B^\textsf{T} B in the real case). Here B is a k \times n matrix, where k is the rank of M. Various ways to obtain such a decomposition include computing the Cholesky decomposition and taking the non-negative square root of M.

The columns b^{(1)}, \dots, b^{(n)} of B can be seen as n vectors in \mathbb{C}^k (or k-dimensional Euclidean space \mathbb{R}^k, in the real case). Then
M_{ij} = b^{(i)} \cdot b^{(j)},
where the dot product a \cdot b = \sum_{\ell=1}^k a_\ell^* b_\ell is the usual inner product on \mathbb{C}^k.

Thus a Hermitian matrix M is positive semidefinite if and only if it is the Gram matrix of some vectors b^{(1)}, \dots, b^{(n)}. Such vectors are called a vector realization of {{nowrap|M.}} The infinite-dimensional analog of this statement is Mercer's theorem.
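
A minimal sketch of one such decomposition, using the eigendecomposition rather than Cholesky so that nearly rank-deficient M is also handled gracefully; the matrix M is an arbitrary example:
<syntaxhighlight lang="python">
import numpy as np

# A positive semidefinite matrix (an arbitrary example).
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# One realization via the eigendecomposition M = U diag(w) U^dagger:
# take B = diag(sqrt(w)) U^dagger, so that B^dagger B = M.
w, U = np.linalg.eigh(M)
B = np.sqrt(np.clip(w, 0, None))[:, None] * U.conj().T  # clip guards rounding

print(np.allclose(B.conj().T @ B, M))  # True: the columns of B realize M
</syntaxhighlight>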

Uniqueness of vector realizations

If M is the Gram matrix of vectors v_1, \dots, v_n in \mathbb{R}^k, then applying any rotation or reflection of \mathbb{R}^k (any orthogonal transformation, that is, any Euclidean isometry preserving 0) to the sequence of vectors results in the same Gram matrix. That is, for any k \times k orthogonal matrix Q, the Gram matrix of Q v_1, \dots, Q v_n is also {{nowrap|M.}}

This is the only way in which two real vector realizations of M can differ: the vectors v_1, \dots, v_n are unique up to orthogonal transformations. In other words, the dot products v_i \cdot v_j and w_i \cdot w_j are equal if and only if some rigid transformation of \mathbb{R}^k transforms the vectors v_1, \dots, v_n to w_1, \dots, w_n and 0 to 0.

The same holds in the complex case, with unitary transformations in place of orthogonal ones. That is, if the Gram matrix of vectors v_1, \dots, v_n is equal to the Gram matrix of vectors w_1, \dots, w_n in \mathbb{C}^k, then there is a unitary k \times k matrix U (meaning U^\dagger U = I) such that v_i = U w_i for i = 1, \dots, n.{{harvtxt|Horn|Johnson|2013}}, p. 452, Theorem 7.3.11
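
The invariance under orthogonal transformations can be verified directly; a sketch with random data, where the orthogonal matrix Q is obtained from a QR factorization:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
V = rng.standard_normal((3, 4))  # columns: four vectors in R^3
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random orthogonal matrix

G1 = V.T @ V
G2 = (Q @ V).T @ (Q @ V)   # Gram matrix after rotating every vector
print(np.allclose(G1, G2)) # True: the Gram matrix is unchanged
</syntaxhighlight>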

Other properties

  • Because G = G^\dagger, it is necessarily the case that G and G^\dagger commute. That is, a real or complex Gram matrix G is also a normal matrix.
  • The Gram matrix of any orthonormal basis is the identity matrix. Equivalently, the Gram matrix of the rows or the columns of a real rotation matrix is the identity matrix. Likewise, the Gram matrix of the rows or columns of a unitary matrix is the identity matrix.
  • The rank of the Gram matrix of vectors in \mathbb{R}^k or \mathbb{C}^k equals the dimension of the space spanned by these vectors (see the sketch after this list).
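
A quick numerical check of the rank property (the vectors are random except for one imposed dependency):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
V = rng.standard_normal((6, 4))
V[:, 3] = V[:, 0] - 2 * V[:, 1]  # force a dependency: the span has dimension 3
G = V.T @ V
print(np.linalg.matrix_rank(G), np.linalg.matrix_rank(V))  # both 3
</syntaxhighlight>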

Gram determinant

The Gram determinant or Gramian is the determinant of the Gram matrix:
\bigl|G(v_1, \dots, v_n)\bigr| = \begin{vmatrix}
\langle v_1, v_1\rangle & \langle v_1, v_2\rangle & \dots & \langle v_1, v_n\rangle \\
\langle v_2, v_1\rangle & \langle v_2, v_2\rangle & \dots & \langle v_2, v_n\rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle v_n, v_1\rangle & \langle v_n, v_2\rangle & \dots & \langle v_n, v_n\rangle
\end{vmatrix}.

If v_1, \dots, v_n are vectors in \mathbb{R}^m, then the Gram determinant is the square of the n-dimensional volume of the parallelotope formed by the vectors. In particular, the vectors are linearly independent if and only if the parallelotope has nonzero n-dimensional volume, if and only if the Gram determinant is nonzero, if and only if the Gram matrix is nonsingular. When {{nowrap|n > m}} the determinant and volume are zero. When {{nowrap|1=n = m}}, this reduces to the standard theorem that the absolute value of the determinant of the matrix formed by n n-dimensional vectors is the n-dimensional volume of the parallelotope they span. The Gram determinant is also useful for computing the volume of the simplex formed by the vectors; that volume is {{math|Volume(parallelotope) / n!}}.

The Gram determinant can also be expressed in terms of the exterior product of vectors by
\bigl|G(v_1, \dots, v_n)\bigr| = \| v_1 \wedge \cdots \wedge v_n \|^2.
When the vectors v_1, \ldots, v_n \in \mathbb{R}^m are defined from the positions of points p_1, \ldots, p_n relative to some reference point p_{n+1},
(v_1, v_2, \ldots, v_n) = (p_1 - p_{n+1}, p_2 - p_{n+1}, \ldots, p_n - p_{n+1}),
then the Gram determinant can be written as the difference of two Gram determinants,
\bigl|G(v_1, \dots, v_n)\bigr| = \bigl|G((p_1, 1), \dots, (p_{n+1}, 1))\bigr| - \bigl|G(p_1, \dots, p_{n+1})\bigr|,
where each (p_j, 1) is the corresponding point p_j supplemented with the coordinate value of 1 for an (m+1)-st dimension.{{Citation needed|date=February 2022}} Note that in the common case that {{math|1=n = m}}, the second term on the right-hand side will be zero.
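
A short numerical illustration of the volume interpretation; the two vectors in \mathbb{R}^3 are an arbitrary example, and the cross product provides an independent check:
<syntaxhighlight lang="python">
import numpy as np

# Two vectors spanning a parallelogram in R^3.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 2.0, 0.0])
V = np.column_stack([v1, v2])

G = V.T @ V
area = np.sqrt(np.linalg.det(G))         # sqrt of the Gram determinant
print(area)                              # 2.0
print(np.linalg.norm(np.cross(v1, v2)))  # also 2.0, the classical |v1 x v2|
# The triangle (simplex) on v1, v2 has area 2.0 / 2! = 1.0.
</syntaxhighlight>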

Constructing an orthonormal basis

Given a set of linearly independent vectors \{v_i\} with Gram matrix G defined by G_{ij} := \langle v_i, v_j\rangle, one can construct an orthonormal basis
u_i := \sum_j \bigl(G^{-1/2}\bigr)_{ji} v_j.
In matrix notation, U = V G^{-1/2}, where U has orthonormal basis vectors \{u_i\} and the matrix V is composed of the given column vectors \{v_i\}.

The matrix G^{-1/2} is guaranteed to exist. Indeed, G is Hermitian, and so can be decomposed as G = U D U^\dagger with U a unitary matrix and D a real diagonal matrix. Additionally, the v_i are linearly independent if and only if G is positive definite, which implies that the diagonal entries of D are positive. G^{-1/2} is therefore uniquely defined by G^{-1/2} := U D^{-1/2} U^\dagger. One can check that these new vectors are orthonormal:
\begin{align}
\langle u_i, u_j \rangle &= \sum_{i'} \sum_{j'} \Bigl\langle \bigl(G^{-1/2}\bigr)_{i'i} v_{i'}, \bigl(G^{-1/2}\bigr)_{j'j} v_{j'} \Bigr\rangle \\
&= \sum_{i'} \sum_{j'} \bigl(G^{-1/2}\bigr)_{ii'} G_{i'j'} \bigl(G^{-1/2}\bigr)_{j'j} \\
&= \bigl(G^{-1/2} G G^{-1/2}\bigr)_{ij} = \delta_{ij}
\end{align}
where we used \bigl(G^{-1/2}\bigr)^\dagger = G^{-1/2}.
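
A numerical sketch of this construction, sometimes called symmetric or Löwdin orthogonalization (the input vectors are random example data):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
V = rng.standard_normal((5, 3))         # columns: linearly independent vectors

G = V.T @ V
w, S = np.linalg.eigh(G)                # G = S diag(w) S^T, with w > 0
G_inv_sqrt = S @ np.diag(w**-0.5) @ S.T # the unique G^{-1/2}

U = V @ G_inv_sqrt                      # U = V G^{-1/2}
print(np.allclose(U.T @ U, np.eye(3)))  # True: the new columns are orthonormal
</syntaxhighlight>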

References

{{reflist}}
  • {{cite book|last1=Horn|first1=Roger A.|last2=Johnson|first2=Charles R.|title=Matrix Analysis|edition=2nd|publisher=Cambridge University Press|year=2013}}

{{Matrix classes}}
