Digging into the Conjugation in Complex Inner Products
The inconsistency between the inner product on complex spaces and Hermitian quadratic forms regarding which element is conjugated.
There is a phenomenon in linear algebra textbooks that is difficult to understand at first glance.
By common mathematical convention, the standard inner product on an $n$-dimensional complex space $\mathbb{C}^n$ is defined as:

$$\langle u, v \rangle = \sum_{i=1}^{n} u_i \overline{v_i}$$
And at the same time there is the definition of the Hermitian quadratic form:

$$f(x) = \sum_{i=1}^{n} \sum_{j=1}^{n} \overline{x_i} \, a_{ij} \, x_j$$
These two definitions, which are supposed to be closely related, take conjugates on different elements — for the inner product, it is the latter, $\overline{v_i}$; for the quadratic form, it is the former, $\overline{x_i}$.
It is of course possible to use another set of definitions, where $\langle u, v \rangle$ and $f(x)$ both take the conjugate on the other element, and the “inconsistency” seen above is still present.
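Concretely, that alternative pair of definitions would presumably read:

$$\langle u, v \rangle = \sum_{i=1}^{n} \overline{u_i} \, v_i, \qquad f(x) = \sum_{i=1}^{n} \sum_{j=1}^{n} x_i \, a_{ij} \, \overline{x_j}$$

with the conjugate now on the former element of the inner product and the latter element of the quadratic form.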
After wrestling with this strange question for a long time, I had some wonderful thoughts, which are recorded in detail here.
Unless otherwise specified, we establish the following convention:
- $u_i$, $v_i$ stand for the $i$-th components of the vectors $u$ and $v$ in $\mathbb{C}^n$, respectively;
- $a_{ij}$ stands for the element at the $i$-th row and the $j$-th column of the positive definite Hermitian matrix $A$.
The Simple Explanation
In fact, $\langle u, v \rangle = v^* I u$ and $f(x) = x^* A x$ (where $^*$ denotes the conjugate transpose); hence they are both specialisations of the generalised quadratic form $y^* A x$.
As for the difference in the “position” of the conjugation, it is merely an illusion created by the way it is written. It is easy to see this in the following form:

$$\langle u, v \rangle = \sum_{i=1}^{n} u_i \overline{v_i} = \sum_{i=1}^{n} \sum_{j=1}^{n} \overline{v_i} \, \delta_{ij} \, u_j$$

(where the Kronecker delta $\delta_{ij}$ equals the element at the $i$-th row and the $j$-th column of the identity matrix $I$.)
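A quick numerical check (a minimal NumPy sketch; the helper name `g` and the random test data are ours) confirms that both definitions are specialisations of $y^* A x$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B.conj().T @ B + n * np.eye(n)   # a positive definite Hermitian matrix

def g(x, y, A):
    """Generalised quadratic form  y* A x  =  sum_{i,j} conj(y_i) a_ij x_j."""
    return y.conj() @ A @ x

# The standard inner product <u, v> = sum_i u_i conj(v_i) is g with A = I.
assert np.isclose(g(u, v, np.eye(n)), np.sum(u * v.conj()))

# The Hermitian quadratic form f(x) = sum_{i,j} conj(x_i) a_ij x_j is g with y = x.
assert np.isclose(g(u, u, A), sum(u[i].conj() * A[i, j] * u[j]
                                  for i in range(n) for j in range(n)))
```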
The Detailed Explanation
The form $\langle u, v \rangle = \sum_i u_i \overline{v_i}$ is not randomly chosen. There is a more fundamental reason behind the mathematicians’ preference for it. [The physics convention seems to be the opposite, but the reason for this (the Dirac notation of quantum mechanics) is irrelevant to our topic here.]
Why Is There the Conjugate and Conjugate Symmetry?
First, why is there a conjugate in the standard inner product?
Looking back at the dot product on the real space $\mathbb{R}^n$, it is essentially a generalisation of the squared Euclidean norm $\|v\|^2 = v \cdot v$ to the case of two vectors.
On the complex space $\mathbb{C}^n$ there needs to be a similar operation, satisfying positive definiteness: $\langle v, v \rangle \ge 0$, with equality only for $v = 0$. In the case of one dimension, the squared modulus of a complex number, $|v|^2 = v \overline{v}$, is clearly the first choice. Generalising it to $n$ dimensions, the definition $\langle u, v \rangle = \sum_i u_i \overline{v_i}$ arises. The conjugation comes from here.
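Indeed, with this definition positive definiteness falls out immediately:

$$\langle v, v \rangle = \sum_{i=1}^{n} v_i \overline{v_i} = \sum_{i=1}^{n} |v_i|^2 \ge 0, \qquad \text{with equality if and only if } v = 0$$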
From another perspective, why does the inner product satisfy the conjugate symmetry $\langle u, v \rangle = \overline{\langle v, u \rangle}$, rather than the plain symmetry $\langle u, v \rangle = \langle v, u \rangle$?
Let us return to the standard inner product and try to separate the real part of the expression from the imaginary part:

$$\langle u, v \rangle = \sum_{i=1}^{n} u_i \overline{v_i} = \sum_{i=1}^{n} \big( \operatorname{Re} u_i \operatorname{Re} v_i + \operatorname{Im} u_i \operatorname{Im} v_i \big) + i \sum_{i=1}^{n} \big( \operatorname{Im} u_i \operatorname{Re} v_i - \operatorname{Re} u_i \operatorname{Im} v_i \big)$$
It can be observed that the real part is equivalent to a sum of dot products on $\mathbb{R}^2$, while the imaginary part equals a sum of cross products on $\mathbb{R}^2$, giving the “rotation angle in the $n$-dimensional complex space from $v$ to $u$” — for instance, when the arguments (polar angles) of corresponding components of the two vectors are equal, $\operatorname{Im} \langle u, v \rangle = 0$; when the argument of each component of $u$ is that of the corresponding component of $v$ rotated by $90$ degrees, $\operatorname{Im} \langle u, v \rangle$ reaches its maximal possible value $\sum_i |u_i| \, |v_i|$.
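A small NumPy sketch (with arbitrary test vectors of our choosing) verifies both the decomposition and the $90$-degree example:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

ip = np.sum(u * v.conj())   # <u, v>

# Real part: sum of 2D dot products (Re u_i, Im u_i) . (Re v_i, Im v_i).
assert np.isclose(ip.real, np.sum(u.real * v.real + u.imag * v.imag))
# Imaginary part: sum of 2D cross products, taken "from v to u".
assert np.isclose(ip.imag, np.sum(u.imag * v.real - u.real * v.imag))

# Rotating every component of v by 90 degrees maximises the imaginary part.
w = 1j * v                  # arg(w_i) = arg(v_i) + 90 degrees
ip_max = np.sum(w * v.conj())
assert np.isclose(ip_max.real, 0)
assert np.isclose(ip_max.imag, np.sum(np.abs(w) * np.abs(v)))
```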
And in the more general definition of the inner product, it is natural to expect the real and imaginary parts of the result to have the corresponding properties respectively. Conjugate symmetry is derived from the anticommutative property of the cross product, or more generally, the anticommutative property of “rotation angles in complex spaces”.
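Written out, swapping the two vectors keeps the commutative dot-product part and negates the anticommutative cross-product part:

$$\langle v, u \rangle = \operatorname{Re} \langle u, v \rangle - i \operatorname{Im} \langle u, v \rangle = \overline{\langle u, v \rangle}$$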
Seen in this light, the conjugate in the definition is indeed an excellent choice.
Why Is the Conjugate Taken on the Second Vector?
To solve this problem, one has to explore another layer of the nature of inner products, and it is never wrong to start with the simplest dot product. What is the nature of the dot product?
We can of course say that it represents “the projection of $u$ onto $v$, multiplied by the length of $v$”, but this is not deep enough.
3Blue1Brown explained in depth in the video🪐 a viewpoint: the dot product $u \cdot v$ is the application to $u$ of a linear transformation defined by $v$. This transformation turns any vector $x$ into the scalar value $v^{\mathsf{T}} x$.
From this perspective, an inner product function and a vector together also determine a transformation. That is to say, an “inner product” is an operator that maps vectors onto linear transformations. In a more abstract manner, this is an instance of currying: a binary function $f(x, y)$ can be seen as an operator that maps an argument $y$ to a unary function $f(\cdot, y)$.
A positive definite Hermitian matrix $A$ defines such an operator, which maps a vector $v$ onto a transformation $x \mapsto v^* A x$ based on the standard basis. Applying the transformation to $u$ results in the previously seen form $v^* A u = \sum_{i,j} \overline{v_i} \, a_{ij} \, u_j$. This is actually the general form of inner products on $\mathbb{C}^n$ (the Hermitian form).
Hence, the reason why the conjugate is taken on $v$ is that we expect $u$, as the element being transformed, to keep its original form, and all computations to be put inside the linear transformation $x \mapsto v^* A x$. As for why $u$ is written before $v$, it is probably because it is more intuitive to first specify the element being transformed and then specify the transformation in this binary operation.
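To make the currying view concrete, here is a minimal NumPy sketch (the name `as_functional` and the test data are ours, not standard notation):

```python
import numpy as np
from typing import Callable

def as_functional(A: np.ndarray, v: np.ndarray) -> Callable[[np.ndarray], complex]:
    """Curry the Hermitian form: map v to the linear transformation x -> v* A x."""
    row = v.conj() @ A            # all computation lives inside the transformation
    return lambda x: row @ x      # x, the element being transformed, stays untouched

rng = np.random.default_rng(2)
n = 3
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B.conj().T @ B + n * np.eye(n)   # a positive definite Hermitian matrix
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

f_v = as_functional(A, v)            # the linear transformation defined by v
assert np.isclose(f_v(u),            # <u, v>_A  =  v* A u
                  sum(v[i].conj() * A[i, j] * u[j]
                      for i in range(n) for j in range(n)))
```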