Nambu-Goldstone modes in the random phase approximation

I show that the kernel of the random phase approximation (RPA) matrix based on a stable Hartree, Hartree-Fock, Hartree-Bogolyubov or Hartree-Fock-Bogolyubov mean field solution decomposes into two subspaces. In the equivalent formalism of a classical Hamiltonian homogeneous of second degree in canonical coordinates, the vectors of a basis for one subspace are associated with conjugate momenta of cyclic coordinates (Nambu-Goldstone modes), while those of a basis for the other are associated with pairs of a coordinate and its conjugate momentum neither of which enters the Hamiltonian at all. In a subspace complementary to the one spanned by all these coordinates, including the conjugate coordinates of the Nambu-Goldstone momenta, the RPA matrix behaves as in the case of a zero-dimensional kernel. This result was derived very recently by Nakada as a corollary to a general analysis of RPA matrices based on both stable and unstable mean field solutions. The present proof does not rest on Nakada's general results.


Introduction
The random phase approximation (RPA) [1] is ubiquitous in many fields of physics including nuclear physics, and it is described in textbooks such as the much cited monograph by Ring and Schuck [2]. Formally it leads to the analysis of a matrix

  M = (A B; −B* −A*) = (1 0; 0 −1) S,   S = (A B; B* A*),

where A and B are n × n matrices, A is Hermitian and B is symmetric, so that S is Hermitian. The matrix M is the RPA matrix and S is the stability matrix. When M is constructed from excitations of a Hartree, Hartree-Fock, Hartree-Bogolyubov or Hartree-Fock-Bogolyubov self-consistent mean field solution, S is the Hessian matrix of the mean field energy with respect to variations about self-consistency [3]. This mathematical problem is well analysed when S is positive definite [2,3]. Then M has 2n linearly independent eigenvectors x_j. The corresponding eigenvalues ω_j form pairs of opposite nonvanishing reals, and the eigenvectors can be normalised so that

  x_j† (1 0; 0 −1) x_k = (sign ω_j) δ_jk.
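The positive definite case can be checked concretely. The following numerical sketch (the random A and B and the diagonal shift are illustrative devices, not taken from the text) verifies the pairing of the eigenvalues and the sign rule of the normalisation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random Hermitian A and symmetric B.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (A + A.conj().T) / 2
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = (B + B.T) / 2

S = np.block([[A, B], [B.conj(), A.conj()]])
# Shifting A by a real multiple of the identity shifts S by the same
# multiple of the 2n x 2n identity; choose the shift so S is positive definite.
shift = np.abs(np.linalg.eigvalsh(S)).max() + 1.0
A += shift * np.eye(n)
S = np.block([[A, B], [B.conj(), A.conj()]])
assert np.linalg.eigvalsh(S).min() > 0

eta = np.diag(np.concatenate([np.ones(n), -np.ones(n)]))
M = eta @ S                                   # RPA matrix

w, X = np.linalg.eig(M)

# Eigenvalues are real, nonvanishing, and come in pairs of opposite sign.
assert np.allclose(w.imag, 0)
assert np.all(np.abs(w.real) > 1e-8)
ws = np.sort(w.real)
assert np.allclose(ws, -ws[::-1])

# The eta-norm of each eigenvector carries the sign of its eigenvalue.
eta_norms = np.real(np.einsum('ij,ik,kj->j', X.conj(), eta, X))
assert np.all(np.sign(eta_norms) == np.sign(w.real))
```

The sign rule follows from x†Sx = ω x†ηx with x†Sx > 0, which also shows why it fails once S is merely semidefinite.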
It often occurs, however, that S is only positive semidefinite. Specifically, this is the case when the mean field solution violates some continuous symmetry of the many-body Hamiltonian, such as translational or rotational invariance, because the mean field energy is then invariant to transformations of the solution within the symmetry group. This leads to vanishing eigenvalues of M, and the number of linearly independent eigenvectors is generally less than 2n. It is usually assumed that the eigenvectors corresponding to such vanishing eigenvalues can be interpreted as associated, in the language of classical analytic mechanics, with generalised momenta whose conjugate coordinates describe local variations within the symmetry group. These pairs of a cyclic coordinate and its conjugate momentum form the so-called Nambu-Goldstone modes [3,4]. However, to my knowledge it was not proved until very recently that this interpretation is always consistent with the structure of M.
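A minimal illustration of the loss of eigenvectors (a hypothetical 1 × 1 example, not taken from the text): with A = B = 1/2 the stability matrix is positive semidefinite with a one-dimensional kernel, and the RPA matrix becomes a 2 × 2 Jordan block for the eigenvalue zero, the pattern of a conserved momentum paired with a cyclic coordinate:

```python
import numpy as np

a, b = 0.5, 0.5
S = np.array([[a, b], [b, a]])               # eigenvalues 0 and 1: semidefinite
M = np.array([[a, b], [-b, -a]])             # RPA matrix

assert np.linalg.eigvalsh(S).min() >= -1e-12
assert np.abs(np.linalg.eigvals(M)).max() < 1e-6   # both eigenvalues vanish

# Only one eigenvector goes with the doubly degenerate eigenvalue zero:
# M has rank 1, so its kernel is one-dimensional, and M squared vanishes.
assert np.linalg.matrix_rank(M) == 1
assert np.allclose(M @ M, 0)
```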
This situation changed due to work by Nakada, who presented an extensive analysis of M in the most general case, when no definiteness of S at all is assumed [5]. In an addendum to this work, Nakada derives from his general formalism that when S is positive semidefinite, the space acted on by M decomposes into three subspaces: one where the vectors of a certain basis correspond to pairs of a coordinate and its conjugate momentum that do not enter the Hamiltonian at all, one where they form Nambu-Goldstone mode pairs, and one where M acts as in the case of a positive definite S [6]. I here give a proof of this result that does not rest on Nakada's general formalism.

Change of basis
A unitary transformation gives

  M′ = N′S′,

where S′ is real and symmetric and N′ is imaginary and antisymmetric. Because S′ is a real, symmetric matrix, a further real, orthogonal transformation maps it to a real, positive semidefinite, diagonal matrix ∆. Applying both transformations successively results in

  M″ = N″∆,    (1)

where N″ is imaginary and antisymmetric and obeys N″² = 1. Conversely, any matrix of this form can be obtained in this way. In the following the double primes are dropped, so M = N∆.

One-dimensional kernel of the stability matrix
First assume for simplicity that ∆'s kernel is one-dimensional and its first diagonal element is zero while the rest are positive. Let x^(1) be the unit vector with first coordinate 1 and the others 0, so that ∆x^(1) = 0. Due to the antisymmetry of N, the real vector −iNx^(1) belongs to the space orthogonal to x^(1), where orthogonality x ⊥ y is defined by x^T y = 0. It does not vanish because N² = 1.
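The two transformations can be realised concretely. In the sketch below the particular unitary C is one convenient choice, assumed for illustration; the real orthogonal matrix O then comes from the eigendecomposition of the real symmetric S′:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
I, Z = np.eye(n), np.zeros((n, n))

# Hermitian A, symmetric B, with A shifted so that S is positive definite.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (A + A.conj().T) / 2
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = (B + B.T) / 2
S = np.block([[A, B], [B.conj(), A.conj()]])
A += (np.abs(np.linalg.eigvalsh(S)).max() + 1.0) * I
S = np.block([[A, B], [B.conj(), A.conj()]])
eta = np.block([[I, Z], [Z, -I]])
M = eta @ S

# One convenient unitary: C = (1/sqrt 2)(1 i; 1 -i) in n x n blocks.
C = np.block([[I, 1j * I], [I, -1j * I]]) / np.sqrt(2)
Np = C.conj().T @ eta @ C                  # imaginary, antisymmetric, squares to 1
Sp = (C.conj().T @ S @ C).real             # real and symmetric
assert np.allclose(Np.real, 0) and np.allclose(Np, -Np.T)
assert np.allclose(Np @ Np, np.eye(2 * n))
assert np.allclose(C.conj().T @ S @ C, Sp) and np.allclose(Sp, Sp.T)

# A real orthogonal transformation diagonalises S'.
d, O = np.linalg.eigh(Sp)
Delta = np.diag(d)                         # real, positive (semi)definite, diagonal
Npp = O.T @ Np @ O                         # still imaginary antisymmetric
Mpp = O.T @ C.conj().T @ M @ C @ O
assert np.allclose(Mpp, Npp @ Delta)       # the form (1)
```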
With ∆^−1 denoting the diagonal matrix with first diagonal element zero and the reciprocals of ∆'s diagonal elements in the following positions (so it is not in a strict sense ∆'s inverse, which does not exist), let

  x^(2) = a∆^−1Nx^(1)    (2)

with an imaginary normalisation factor a. Then Mx^(1) = 0 and Mx^(2) = ax^(1). I call the space of vectors x obeying the relations (3), x ⊥ −iNx^(1) and x ⊥ −iNx^(2), the residual space. Being the orthogonal complement of span(−iNx^(j), j = 1, 2), which is two-dimensional because −iN is orthogonal and x^(1) and x^(2) are linearly independent by mutual orthogonality, the residual space has dimension 2n − 2. The relations (3) are easily seen to imply an equation (4) for the coordinates of a vector of the residual space, where N_k· and N_·k denote the kth row and column vectors of N. Because the coefficient of x_1 in this equation is positive, the equation can be satisfied for any x_k, k = 2, …, 2n, by adjustment of x_1. A basis for the residual space is thus obtained by selecting a basis for the space orthogonal to x^(1) and −iNx^(1) and supplementing each basis vector by the first coordinate required by equation (4).
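The Nambu-Goldstone pair just constructed can be verified numerically. The sketch below builds an arbitrary matrix of the form (1) with a one-dimensional kernel (the scrambling orthogonal matrix Q and the choice a = i are illustrative devices) and checks that x^(1) and x^(2) form a Jordan chain of M:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
I = np.eye(n)

# Imaginary antisymmetric N with N^2 = 1: a canonical block form
# scrambled by a random real orthogonal matrix Q.
N0 = np.block([[0 * I, 1j * I], [-1j * I, 0 * I]])
Q = np.linalg.qr(rng.standard_normal((2 * n, 2 * n)))[0]
N = Q.T @ N0 @ Q
assert np.allclose(N @ N, np.eye(2 * n))

# Diagonal Delta with a one-dimensional kernel in the first position.
d = np.concatenate([[0.0], rng.uniform(1.0, 2.0, 2 * n - 1)])
M = N @ np.diag(d)

# Nambu-Goldstone momentum: the kernel vector of Delta.
x1 = np.eye(2 * n)[:, 0]
assert np.allclose(M @ x1, 0)

# Its conjugate coordinate, built with the pseudo-inverse Delta^- and
# an imaginary normalisation factor a.
dinv = np.concatenate([[0.0], 1.0 / d[1:]])
a = 1j
x2 = a * np.diag(dinv) @ N @ x1
assert np.allclose(x2.imag, 0)             # x2 is a real vector
assert np.allclose(x1 @ x2, 0)             # orthogonal to x1
assert np.allclose(M @ x2, a * x1)         # Jordan chain: M x2 = a x1
```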
Now let e^(j), j = 1, …, 2n, be a real, orthonormal basis for the total space such that e^(1) = x^(1) and e^(2) = −iNx^(1), and let, for j = 3, …, 2n, the vector x^(j) be obtained by supplementing e^(j) by a first coordinate such as to satisfy equation (4). As these vectors are linearly independent, they span the residual space. Moreover, because of the relations (5), this expansion performs the identity transformation within this space. In the residual space the transformation M therefore has matrix elements e^(j)T M x^(k), j, k ≥ 3, where the reduction of the sum follows from e^(j)T N² e^(k) = e^(j)T e^(k) = δ_jk, j, k ≥ 3.
As the matrix (e (j)T ∆e (k) , j, k ≥ 3) is positive definite and the matrix (e (j)T Ne (k) , j, k ≥ 3) is imaginary and antisymmetric, the restriction of M to the residual space is thus similar to a matrix of the form (1) with a positive definite ∆. Moreover, because by the equations (5)

Multidimensional kernel
For a generalisation to the case when ∆'s kernel K has dimension m greater than one, let e^(j), j = 1, …, m, be orthonormal basis vectors for K such that e^(j), j = m′ + 1, …, m, span K ∩ iNK⊥. Assume m′ + 1 ≤ j ≤ m. Then e^(j) ∈ iNK⊥ implies e^(j+m−m′) := −iNe^(j) ∈ K⊥, and because −iN is orthogonal, the latter vectors are orthonormal. The matrix (e^(j)T N e^(k), j, k ≤ m′) is nonsingular. In fact, if for some linear combination x of e^(j), j = 1, …, m′, the vector −iNx were perpendicular to all e^(k), k = 1, …, m′, then, because −iNx is also perpendicular to all e^(k), k = m′ + 1, …, m (by x ⊥ −e^(k+m−m′) = iNe^(k) and the orthogonality of −iN), it would be perpendicular to K. But then x would belong to iNK⊥, a contradiction. As (e^(j)T N e^(k), j, k ≤ m′) is also imaginary and antisymmetric, it follows that m′ is even. The square of (e^(j)T N e^(k), j, k ≤ m′) is not necessarily the unit matrix, but it can be given this property by right and left multiplications by a nonsingular square matrix and its transpose. This allows defining a basis for span(e^(j), j = 1, …, m′) whose vectors are associated, in the manner detailed above, with pairs of a coordinate and its conjugate momentum obeying the canonical Poisson bracket relations (including vanishing of the Poisson brackets between coordinates and momenta belonging to different pairs). These canonical coordinates have vanishing Poisson brackets with the Hamiltonian, so as parts of a complete set of pairs of a coordinate and its conjugate momentum obeying the canonical Poisson bracket relations they are entirely absent from the Hamiltonian. Let x^(j) = e^(j), j = 1, …, m, and x^(j+m−m′) = a_j ∆^−1 N e^(j), j = m′ + 1, …, m, with imaginary normalisation factors a_j and ∆^−1 defined in the way analogous to that above. These vectors are linearly independent because ∆^−1 is nonsingular in K⊥. As before, the vectors x^(k), k = m′ + 1, …, 2m − m′, span a subspace invariant to iM.
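The splitting of the kernel K into directions paired within K (coordinate and momentum both absent) and the part K ∩ iNK⊥ (Nambu-Goldstone momenta) can be made concrete in a small example (the 6 × 6 construction below is illustrative, not taken from the text):

```python
import numpy as np

n = 3
I = np.eye(n)
# Canonical N: coordinate k is paired with momentum k + n.
N = np.block([[0 * I, 1j * I], [-1j * I, 0 * I]])

# Delta with kernel K spanned by e1, e2, e4: pair 1 (e1, e4) lies
# entirely in K, while only the momentum e2 of pair 2 does.
d = np.array([0.0, 0.0, 2.0, 0.0, 3.0, 5.0])
K = np.eye(2 * n)[:, [0, 1, 3]]

# dim(K intersect iNK) counts the kernel directions whose partners also
# lie in K, i.e. the "free" pairs; the rest are Nambu-Goldstone momenta.
iNK = (1j * N @ K).real
r = np.linalg.matrix_rank(np.hstack([K, iNK]))
m_prime = K.shape[1] + iNK.shape[1] - r
assert m_prime == 2                 # one free pair: m' = 2
assert K.shape[1] - m_prime == 1    # one Nambu-Goldstone momentum
```

The intersection dimension is computed from the rank identity dim(U ∩ V) = dim U + dim V − dim(U + V).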
Because the matrix m with the elements (6) is positive definite, one can make a transformation among the vectors x^(j), j = m′ + 1, …, m, to obtain the relations (7). A further transformation then gives additional relations without destroying the relations (7). After these transformations one has the expression (8), where the matrix ia with elements ia_jk is the inverse of the matrix m defined by equation (6) before the transformations. As ia is symmetric and positive definite, its appearance in equation (8) is rendered positive diagonal by application, after the transformations, of one more orthogonal transformation simultaneously to both sets of vectors x^(j) and x^(j+m−m′). As in the one-dimensional case, the relations analogous to (5) show that the expansion in this basis performs the identity transformation within the residual space, so the restriction of M to this space has matrix elements e^(j)T M x^(k), j, k ≥ 2m − m′ + 1, where the reduction of the sum follows from e^(l)T ∆ e^(k) = 0, l = 1, …, m. Because the vectors ∆x^(j), j = 2m − m′ + 1, …, 2n, are linearly independent and map to members of the residual space by the orthogonal matrix iN, the restriction of M to the residual space is nonsingular. So then is the matrix (e^(j)T N e^(k), j, k ≥ 2m − m′ + 1) by equation (9). This matrix is also imaginary and antisymmetric. Its square is not necessarily the unit matrix, but it can be given this property by right and left multiplications by a nonsingular matrix T and its transpose T^T. Right and left multiplications of the matrix (e^(j)T ∆ e^(k), j, k ≥ 2m − m′ + 1) by (T^−1)^T and T^−1 then result in a similarity transformation of the matrix (e^(j)T M x^(k), j, k ≥ 2m − m′ + 1). As (e^(j)T ∆ e^(k), j, k ≥ 2m − m′ + 1) is symmetric and positive definite and this property is conserved by the right and left multiplications by (T^−1)^T and T^−1, the restriction of M to the residual space is thus similar to a matrix of the form (1) with a positive definite ∆.
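Putting the pieces together, the three subspaces can be exhibited in the same illustrative 6 × 6 construction: a pair entirely absent from the Hamiltonian, a Nambu-Goldstone pair, and a residual oscillator pair on which M acts with real, nonvanishing, paired eigenvalues:

```python
import numpy as np

n = 3
I = np.eye(n)
N = np.block([[0 * I, 1j * I], [-1j * I, 0 * I]])

# H = (1/2) xi^T Delta xi with three kinds of pairs:
#   pair 1: Delta_11 = Delta_44 = 0    -> neither coordinate nor momentum in H
#   pair 2: Delta_22 = 0, Delta_55 = 3 -> cyclic coordinate, conserved momentum
#   pair 3: Delta_33 = 2, Delta_66 = 5 -> ordinary oscillator
d = np.array([0.0, 0.0, 2.0, 0.0, 3.0, 5.0])
M = N @ np.diag(d)
e = np.eye(2 * n)

# Free pair: both e1 and e4 are annihilated by M.
assert np.allclose(M @ e[:, 0], 0) and np.allclose(M @ e[:, 3], 0)

# Nambu-Goldstone pair: e2 is an eigenvector with eigenvalue zero, its
# conjugate coordinate e5 only a generalized eigenvector.
assert np.allclose(M @ e[:, 1], 0)
assert np.allclose(M @ e[:, 4], 3j * e[:, 1])

# Residual pair: nonvanishing eigenvalues +-sqrt(2 * 5), as for a
# positive definite stability matrix.
w = np.linalg.eigvals(M)
nz = np.sort(w[np.abs(w) > 1e-4].real)
assert np.allclose(nz, [-np.sqrt(10.0), np.sqrt(10.0)])
```

The eigenvector count matches the general statement: the kernel of M is three-dimensional, while the zero eigenvalue has multiplicity four, the deficit being the single Nambu-Goldstone coordinate.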

Conclusion
It was shown that when the random phase approximation (RPA) stability matrix is positive semidefinite, the vector space on which the RPA matrix acts can be decomposed into three parts: one where the vectors of a certain basis correspond, in the equivalent formalism of a classical Hamiltonian homogeneous of second degree in canonical coordinates, to pairs of a coordinate and its conjugate momentum that do not enter the Hamiltonian at all; one where they correspond to pairs of a cyclic coordinate and its conjugate momentum (Nambu-Goldstone modes); and a residual space where the RPA matrix acts as in the case of a positive definite stability matrix. This result was also proved very recently by Nakada, as a corollary to an analysis of the most general RPA matrix without limitations on the definiteness of the stability matrix. The present proof does not rest on Nakada's general results.