Nambu–Goldstone modes in the random phase approximation

with conjugate momenta of cyclic coordinates (Nambu–Goldstone modes) and a subspace with a basis whose vectors are associated with pairs of a coordinate and its conjugate momentum neither of which enters the Hamiltonian at all. In a subspace complementary to the one spanned by all these coordinates, including the conjugate coordinates of the Nambu–Goldstone momenta, the RPA matrix behaves as in the case of a zero-dimensional kernel. This result was derived very recently by Nakada as a corollary to a general analysis of RPA matrices based on both stable and unstable mean field solutions. The present proof does not rest on Nakada's general results.


Introduction
The random phase approximation (RPA) [1] is ubiquitous in many fields of physics, including nuclear physics, and it is described in textbooks such as the much cited monograph by Ring and Schuck [2]. Formally, it leads to the analysis of a matrix

M = \begin{pmatrix} A & B \\ -B^* & -A^* \end{pmatrix},

where A and B are n × n matrices, and A is Hermitian and B symmetric, so that

S = \begin{pmatrix} A & B \\ B^* & A^* \end{pmatrix}

is Hermitian. The matrix M is the RPA matrix and S is the stability matrix. When M is constructed from excitations of a Hartree, Hartree-Fock, Hartree-Bogolyubov, or Hartree-Fock-Bogolyubov self-consistent mean field solution, S is the Hessian matrix of the mean field energy with respect to variations about self-consistency [3,4]. This mathematical problem is well analyzed when S is positive definite [2-4]. Then M has 2n linearly independent eigenvectors x^(j), and the corresponding eigenvalues ω_j are real and form pairs ±ω.

It often occurs, however, that S is only positive semidefinite. Specifically, this is the case when the mean field solution violates some continuous symmetry of the many-body Hamiltonian, such as translational or rotational invariance, because the mean field solution is then invariant to transformations within the symmetry group. This leads to vanishing eigenvalues of M, and the number of linearly independent eigenvectors is generally less than 2n. It is usually assumed that the eigenvectors corresponding to such vanishing eigenvalues can be interpreted as associated, in the language of classical analytic mechanics, with generalized momenta whose conjugate coordinates describe local variations within the symmetry group. These pairs of a cyclic coordinate and its conjugate momentum form the so-called Nambu-Goldstone modes [3-6]. However, it was not proved, to my knowledge, until very recently that this interpretation is always consistent with the structure of M.
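These structural facts can be checked numerically. The following sketch (the particular A and B are illustrative choices, not taken from any specific physical model) builds a small RPA matrix with a positive definite stability matrix S and verifies that the eigenvalues of M are real and come in pairs ±ω.

```python
import numpy as np

# Illustrative 2x2 blocks: A Hermitian, B symmetric, chosen so that
# S = [[A, B], [B*, A*]] comes out positive definite.
A = np.array([[2.0, 0.3], [0.3, 2.0]])
B = np.array([[0.2, 0.1], [0.1, 0.2]])

M = np.block([[A, B], [-B.conj(), -A.conj()]])  # RPA matrix
S = np.block([[A, B], [B.conj(), A.conj()]])    # stability matrix

assert np.all(np.linalg.eigvalsh(S) > 0)        # S is positive definite here

# With S positive definite, M has 2n real eigenvalues in pairs (w, -w).
w = np.linalg.eigvals(M)
assert np.allclose(w.imag, 0.0, atol=1e-8)
assert np.allclose(np.sort(w.real), np.sort(-w.real))
```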
This situation changed due to work by Nakada, who presented an extensive analysis of M in the most general case, in which no definiteness of S is assumed at all [7]. In an addendum to this work, Nakada derives from his general formalism that when S is positive semidefinite, the space acted on by M decomposes into three subspaces: one where the vectors of a certain basis correspond to pairs of a coordinate and its conjugate momentum that do not enter the Hamiltonian at all, one where they form Nambu-Goldstone mode pairs, and one where M acts as in the case of a positive definite S [8]. I give here a proof of this result which does not rest on Nakada's general formalism.

Change of basis
A unitary transformation brings M and S to a form in which S is real and symmetric. Because S is then a real, symmetric matrix, a further real, orthogonal transformation maps it to a real, positive semidefinite, diagonal matrix Δ. Applying both transformations successively results in

M = NΔ, (1)

where N is imaginary and antisymmetric and obeys N² = 1. Conversely, any such matrix N is mapped by the inverses of transformations of the above forms to a matrix of the original RPA form.
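A minimal numerical illustration, assuming that M is written as ηS with η = diag(I, −I) (which follows from the block forms above) and assuming one particular choice of unitary U (the explicit transformation is not reproduced here): U maps η to an imaginary, antisymmetric N with N² = 1, and a matrix of the form (1) with positive definite Δ then has real eigenvalues in pairs ±ω.

```python
import numpy as np

n = 2
I = np.eye(n)
Z = np.zeros((n, n))
eta = np.block([[I, Z], [Z, -I]])

# One unitary (an assumed choice) mapping eta to an imaginary,
# antisymmetric N with N^2 = 1:
U = np.block([[I, I], [1j * I, -1j * I]]) / np.sqrt(2)
N = U @ eta @ U.conj().T

assert np.allclose(N.real, 0.0)           # N is imaginary
assert np.allclose(N.T, -N)               # N is antisymmetric
assert np.allclose(N @ N, np.eye(2 * n))  # N^2 = 1

# Form (1): M = N @ Delta with Delta diagonal and positive definite
# again has real eigenvalues in pairs (w, -w).
Delta = np.diag([1.0, 2.0, 3.0, 4.0])
w = np.linalg.eigvals(N @ Delta)
assert np.allclose(w.imag, 0.0, atol=1e-8)
assert np.allclose(np.sort(w.real), np.sort(-w.real))
```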

One-dimensional kernel of the stability matrix
First assume for simplicity that Δ's kernel is one-dimensional, and that its first diagonal element is zero while the rest are positive. Let x^(1) be the unit vector along the first coordinate axis, which spans Δ's kernel. Due to the antisymmetry of N, the real vector −iNx^(1) belongs to the space orthogonal to x^(1), where orthogonality x ⊥ y is defined by x^T y = 0. It does not vanish because N² = 1. With Δ^{−1} denoting the diagonal matrix with first diagonal element zero and the reciprocals of Δ's diagonal elements in the following positions (so it is not in a strict sense Δ's inverse, which does not exist), let

x^(2) = aΔ^{−1}Nx^(1)

with an imaginary normalization factor a. Then Mx^(1) = 0 and Mx^(2) = ax^(1). In terms of canonical coordinates r_i, the vectors x^(1) and x^(2) represent a momentum p and a coordinate q through x^(1)_i = {r_i, p} and x^(2)_i = {r_i, q}. When a is chosen such that

{q, p} = 1, (2)

the pair is canonical. Because x^(1) belongs to the kernel of Δ, the Poisson bracket of the Hamiltonian with p vanishes, so q is a cyclic coordinate. The condition (2) renders a negative imaginary, corresponding to a positive mass. More precisely, ia is the reciprocal mass.
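A toy numerical example (the matrices below are assumed for illustration, with the normalization factor a omitted) shows this Nambu-Goldstone structure: with Δ's kernel spanned by x^(1), and x^(2) built with the pseudo-inverse Δ^{−1}, the pair forms a Jordan chain of length two at eigenvalue zero.

```python
import numpy as np

n = 2
I = np.eye(n)
Z = np.zeros((n, n))
N = np.block([[Z, -1j * I], [1j * I, Z]])  # imaginary, antisymmetric, N^2 = 1

# Diagonal Delta with a one-dimensional kernel (first entry zero):
Delta = np.diag([0.0, 1.5, 2.0, 3.0])
M = N @ Delta

x1 = np.zeros(4)
x1[0] = 1.0                                # spans Delta's kernel
# "Pseudo-inverse" of Delta: zero where Delta is zero, reciprocals elsewhere
Dinv = np.diag([0.0, 1 / 1.5, 1 / 2.0, 1 / 3.0])
x2 = Dinv @ (N @ x1)                       # conjugate-coordinate vector

assert np.allclose(M @ x1, 0)              # momentum: zero mode of M
assert np.allclose(M @ x2, x1)             # coordinate maps onto the momentum
assert np.allclose(M @ (M @ x2), 0)        # Jordan chain of length two at 0
```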
A canonical coordinate r has vanishing Poisson brackets with these two if and only if the vector x with coordinates x_i = {r_i, r} satisfies

x^T Nx^(j) = 0, j = 1, 2. (3)

I call the space of such vectors the residual space. Being the orthogonal complement of span (−iNx^(j), j = 1, 2), which is two-dimensional because −iN is orthogonal and x^(1) and x^(2) are linearly independent by mutual orthogonality, the residual space has dimension 2n − 2. The relations (3) are easily seen to imply (Mx)^T Nx^(j) = 0, j = 1, 2, so the residual space is invariant to iM. For j = 1 the relation (3) requires that x is orthogonal to −iNx^(1). For j = 2 it is equivalent to

x^T NΔ^{−1}N_{·1} = 0, (4)

where N_{k·} and N_{·k} denote the kth row and column vectors of N. Because the coefficient N_{1·}Δ^{−1}N_{·1} of x_1 in this equation is positive, the equation can be satisfied for any x_k, k = 2, . . . , 2n, by adjustment of x_1. A basis for the residual space is thus obtained by selecting a basis for the space orthogonal to x^(1) and −iNx^(1) and supplementing each basic vector by the first coordinate required by Eq. (4). Now let e^(j), j = 1, . . . , 2n, be a real, orthonormal basis for the total space such that e^(1) = x^(1) and e^(2) = −iNx^(1), and let, for j = 3, . . . , 2n, the vector x^(j) be obtained by supplementing e^(j) by a first coordinate such as to satisfy Eq. (4). As these vectors are linearly independent, they span the residual space. Moreover, because Δ's first diagonal element vanishes, Δx^(j) = Δe^(j) for j = 3, . . . , 2n, so Mx^(j) = NΔe^(j). These relations also give the coefficients of the expansion of Mx^(j) in the vectors x^(l), l ≥ 3. As the matrix (e^(j)T Δe^(k), j, k ≥ 3) is positive definite and the matrix (e^(j)T Ne^(k), j, k ≥ 3) is imaginary and antisymmetric, the restriction of M to the residual space is thus similar to a matrix of the form (1) with a positive definite Δ.
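Numerically, for the same kind of toy matrices as above (assumed for illustration), the spectrum splits as described: the zero eigenvalue is defective (one eigenvector, a Jordan block of size two carried by the Nambu-Goldstone pair), while the residual space carries nonzero real eigenvalues in pairs ±ω, as for a positive definite Δ.

```python
import numpy as np

n = 2
I = np.eye(n)
Z = np.zeros((n, n))
N = np.block([[Z, -1j * I], [1j * I, Z]])
Delta = np.diag([0.0, 1.5, 2.0, 3.0])   # one-dimensional kernel
M = N @ Delta

# The zero eigenvalue is defective: one eigenvector, Jordan block of size 2.
assert np.linalg.matrix_rank(M) == 3        # nullity 1: a single zero eigenvector
assert np.linalg.matrix_rank(M @ M) == 2    # nullity 2: chain of length two

# The remaining eigenvalues behave as for a positive definite Delta:
# nonzero, real, in a pair (w, -w).
w = np.linalg.eigvals(M)
wnz = np.sort(w.real[np.abs(w) > 1e-4])     # loose cutoff: defective zeros split ~sqrt(eps)
assert wnz.shape == (2,)
assert np.allclose(wnz, [-np.sqrt(1.5 * 3.0), np.sqrt(1.5 * 3.0)])
```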

Multidimensional kernel
For a generalization to the case when Δ's kernel K has dimension m greater than one, let e^(j), j = 1, . . . , m, be orthonormal basic vectors for K such that e^(j), j = m′ + 1, . . . , m, span K ∩ iNK^⊥. Assume m′ + 1 ≤ j ≤ m. Then e^(j) ∈ iNK^⊥ implies e^(j+m−m′) := −iNe^(j) ∈ K^⊥, and because −iN is orthogonal, the latter vectors are orthonormal. The matrix (e^(j)T Ne^(k), j, k ≤ m′) is nonsingular. In fact, if for some linear combination x of e^(j), j = 1, . . . , m′, the vector −iNx were perpendicular to all e^(k), k = 1, . . . , m′, then, because by x ⊥ −e^(k+m−m′) = iNe^(k) and the orthogonality of −iN the vector −iNx is also perpendicular to all e^(k), k = m′ + 1, . . . , m, it would be perpendicular to K.
But then x would belong to iNK^⊥, a contradiction. As (e^(j)T Ne^(k), j, k ≤ m′) is also imaginary and antisymmetric, it follows that m′ is even. The square of (e^(j)T Ne^(k), j, k ≤ m′) is not necessarily the unit matrix, but (e^(j)T Ne^(k), j, k ≤ m′) can be given this property by right and left multiplications by a nonsingular square matrix and its transpose. This allows defining a basis for span (e^(j), j = 1, . . . , m′) whose vectors are associated, in the manner detailed above, with pairs of a coordinate and its conjugate momentum obeying the canonical Poisson bracket relations (including vanishing of the Poisson brackets between coordinates and momenta belonging to different pairs). These canonical coordinates have vanishing Poisson brackets with the Hamiltonian, so as parts of a complete set of pairs of a coordinate and its conjugate momentum obeying the canonical Poisson bracket relations they will be entirely absent from the Hamiltonian. Let

x^(j+m−m′) = Δ^{−1}Ne^(j), j = m′ + 1, . . . , m,

with Δ^{−1} defined in the way analogous to that above. These vectors are linearly independent because Δ^{−1} is nonsingular in K^⊥. Like before, the vectors x^(k), k = m + 1, . . . , 2m − m′, span a subspace invariant to iM. Notice that Mx^(j+m−m′) = e^(j). Because the matrix m of elements

m_jk = e^(j)T NΔ^{−1}Ne^(k), j, k = m′ + 1, . . . , m, (6)

is positive definite, one can make a transformation among the vectors x^(j), j = m + 1, . . . , 2m − m′, to get the canonical Poisson bracket relations (7) for the pairs of a coordinate and its conjugate momentum represented by these vectors and the vectors e^(j). The transformation then gives, without destroying the relations (7),

Mx^(j+m−m′) = Σ_k e^(k) ia_kj, (8)

where the matrix ia with elements ia_jk is the inverse of the matrix m defined by Eq. (6) before the transformations. As ia is symmetric and positive definite, its appearance in Eq. (8) is rendered positive diagonal by application, after the transformations, of one more orthogonal transformation simultaneously to both sets of vectors x^(j) and x^(j+m−m′), j = m′ + 1, . . . , m.
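The distinction between the two kinds of zero modes can also be seen in a toy example (matrices assumed for illustration): when N maps Δ's kernel K into itself, K ∩ iNK^⊥ is trivial, both members of a canonical pair are zero modes of M, and the zero eigenvalue is not defective; no Jordan chain connects the pair to the rest of the space.

```python
import numpy as np

# N chosen block-diagonal so that it maps the kernel of Delta into itself:
N2 = np.array([[0, -1j], [1j, 0]])
Z = np.zeros((2, 2))
N = np.block([[N2, Z], [Z, N2]])
assert np.allclose(N @ N, np.eye(4))     # N^2 = 1

Delta = np.diag([0.0, 0.0, 2.0, 3.0])    # two-dimensional kernel K = span(e1, e2)
M = N @ Delta

# N maps K into K, so both members of the pair are zero modes and the
# zero eigenvalue is NOT defective (no Jordan chain):
assert np.linalg.matrix_rank(M) == 2
assert np.linalg.matrix_rank(M @ M) == 2     # a "free" pair, absent from H

# The complement again carries a real pair (w, -w):
w = np.linalg.eigvals(M)
wnz = np.sort(w.real[np.abs(w) > 1e-4])
assert np.allclose(wnz, [-np.sqrt(6.0), np.sqrt(6.0)])
```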
This does not change the relations (7). Due to the orthonormality of the basis e^(j), the sum Σ_j x^(j) e^(j)T, j ≥ 2m − m′ + 1, performs the identity transformation within the residual space, so the restriction of M to this space has matrix elements e^(j)T Mx^(k), j, k ≥ 2m − m′ + 1, where the reduction of the sum follows from e^(j)T N²e^(k) = e^(j)T e^(k) = δ_jk, j, k ≥ 2m − m′ + 1. The transformations by T are then not required.

Conclusion
It was shown that when the random phase approximation stability matrix is positive semidefinite, the vector space on which it acts can be decomposed into three parts: one where the vectors of a certain basis correspond, in the equivalent formalism of a classical Hamiltonian homogeneous of second degree in canonical coordinates, to pairs of a coordinate and its conjugate momentum that do not enter the Hamiltonian at all, one where they correspond to pairs of a cyclic coordinate and its conjugate momentum (Nambu-Goldstone modes), and a residual space where the RPA matrix acts as in the case of a positive definite stability matrix. This result was also proved very recently by Nakada, as a corollary to an analysis of the most general RPA matrix, without limitations on the definiteness of the stability matrix. The present proof does not rest on Nakada's general results.