Exhaustive derivation of static self-consistent multi-soliton solutions in the matrix Bogoliubov-de Gennes systems

The matrix-generalized Bogoliubov-de Gennes systems have recently been considered by the present author [arXiv:1509.04242, Phys. Rev. B 93, 024512 (2016)], and time-dependent and self-consistent multi-soliton solutions have been constructed based on the ansatz method. In this paper, restricting the problem to the static case, we exhaustively determine the self-consistent solutions using the inverse scattering theory. Solving the gap equation, we rigorously prove that the self-consistent potential must be reflectionless. As a supplementary topic, we elucidate the relation between the stationary self-consistent potentials and the soliton solutions in the matrix nonlinear Schr\"odinger equation. Asymptotic formulae of multi-soliton solutions for sufficiently isolated solitons are also presented.

In Ref. [25], the present author generalized the BdG model to the multicomponent case; here we call it the matrix BdG model. The matrix BdG model includes examples of unconventional superconductors such as triplet p-wave superfluids/superconductors in condensed matter and SU(n)-symmetric fermions in ultracold atoms (see, e.g., Refs. [26][27][28][29][30][31][32] and references therein). The author constructed the time-dependent soliton solutions based on the ansatz originating from the Gelfand-Levitan-Marchenko (GLM) equation in the inverse scattering theory (IST). In particular, the 2 × 2 case, which describes a mixture of triplet p-wave and singlet s-wave superconductors/superfluids, is investigated in detail, and various examples of multi-soliton dynamics are demonstrated. The related problem in high-energy physics has been considered more recently [33].
In this paper, we attack the problem from another direction. Restricting the problem to the static one, we solve it exhaustively without relying on an ansatz. Using the IST, we take into account all kinds of potentials having nonzero reflection coefficients, and we prove rigorously that the self-consistent solution must be reflectionless. The relation between this paper and Ref. [25] is depicted in Fig. 1.
In fact, the time-dependent multi-component solutions reported in Ref. [25] were obtained by "extrapolating" the static multi-component solutions given in the present paper and the time-dependent one-component solutions given in Refs. [21][22][23]. We believe that the mathematical expressions and techniques provided in this work will offer an insight into the most general time-dependent and finite-reflection solutions shown in Fig. 1.
Here we explain the organization of this paper with a compact summary of each section. Sections 2 and 3 are devoted to the main topic of this paper. In Sec. 2, we formulate the IST of the matrix Zakharov-Shabat (ZS) operator with the finite-density boundary condition. While the same problem is also considered in the context of solving the matrix nonlinear Schrödinger (NLS) equation [34], in order to solve the gap equation we need several new additional formulae and techniques, such as the treatment of degenerate eigenvalues and the expression of the corresponding bound states (Subsec. 2.7), the orthogonality and completeness relations (Subsecs. 2.10-2.12), and an inner-product formula for a higher-order product (Appendix C). In Sec. 3, we define the gap equations and solve them. We prove that the self-consistent solution must have a reflectionless potential, and that the filling rates of the bound states turn out to be the same as in Ref. [25]. Sections 4 and 5 provide additional but important topics. In Sec. 4, we present an asymptotic formula for the reflectionless solution, i.e., the n-soliton solution, when the solitons are well isolated. In Sec. 5, we clarify that the stationary-class potentials are snapshots at each time of the soliton solutions of the matrix NLS equation, as described in Sec. II D of Ref. [25]. Section 6 gives a summary. Appendices A-C provide a few mathematical proofs and formulae to support the mathematical rigor.

Inverse scattering theory

Matrix Zakharov-Shabat eigenvalue problem with unitary background
We use the following notation for the matrix-generalized Pauli matrices, where I_d is a d × d unit matrix. Let ∆(x) be a d × d matrix. We consider the following matrix-generalized ZS eigenvalue problem, and we sometimes write w = (u, v)^T, where u and v are d-component vectors. Here, we assume that the asymptotic form of ∆(x) is given by

∆(x → ±∞) = m∆_±, m > 0, ∆_± : unitary. (2.3)

This means that the singular values of ∆(x) at infinity are all m. If we consider the application to physical systems for the d = 2 case, the unitary background means that the background order parameter (e.g., spin-1 Bose condensates [36] for a bosonic example or superfluid helium 3 [25] for a fermionic example) is non-magnetic.

Fig. 1: The relation between Ref. [25] and this paper. The two works have a common subset, i.e., the time-independent and reflectionless solutions. It must be emphasized that, though we fully consider potentials possessing finite reflection coefficients, we eventually conclude that the self-consistent potential must be reflectionless (Sec. 3). As for the one-component problem, the time-independent and reflectionless case was solved in Ref. [20], the time-independent and finite-reflection case was treated in Ref. [35] (several earlier discussions of the reflectionless nature can also be found in [10,11,14]), and time-dependent reflectionless solutions were constructed in Refs. [21][22][23].

Symmetric and antisymmetric cases and complex conjugate solution
In addition to the general non-symmetric ∆(x), we also consider the special cases where ∆(x) has the symmetry (2.5). When we write down expressions valid only under these symmetries, we always write "(if L = τL*τ†)". Note that ∆_± = D_1 ∆_± D_2† also holds, since the relation must be satisfied also at x = ±∞. Since the above relation can be rewritten as ∆(x) = D_2^T ∆(x)^T D_1^*, the following relation holds. In particular, the symmetric or antisymmetric cases ∆(x) = ±∆(x)^T are realized by appropriate choices of D_1 and D_2. If we consider the physical problem of fermionic superfluids or superconductors, the antisymmetric case describes s-wave superfluids, and the symmetric case describes p-wave superfluids [25]. Here, we note that the differential operators in the off-diagonal terms of the p-wave BdG equation, having a form like {−i∂, ∆}, are replaced by k_F under the Andreev approximation, i.e., the dispersion linearization around the Fermi point. In this treatment, p-wave order parameters are described simply by a symmetric matrix without differential operators. A dimensionless unit with k_F = 1 is adopted after this approximation. See also Sec. IV of Ref. [25] for details of the formulation.

Uniformization variable
The uniformization variable (Zhukovsky transform [37]) is defined in the same way as before [20,25]. By this parametrization, the dispersion relation ǫ² = k² + m² for the uniform system holds automatically, as explained in Ref. [20].
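As a quick numeric sanity check of this parametrization (the explicit definitions ǫ(s) = (m/2)(s + s⁻¹) and k(s) = (m/2)(s − s⁻¹) are the standard finite-density uniformization assumed here, since the displayed equation did not survive extraction):

```python
# Check that the uniformization variable s automatically satisfies the
# uniform-system dispersion relation eps^2 = k^2 + m^2.  The maps eps(s),
# k(s) below are the standard Zhukovsky-type parametrization assumed from
# context, not copied from the (lost) displayed equation.
m = 1.3

def eps(s):
    return 0.5 * m * (s + 1 / s)

def k(s):
    return 0.5 * m * (s - 1 / s)

for s in [0.7 + 0.2j, 2.0 - 1.5j, 1j, -3.0 + 0.1j]:
    assert abs(eps(s) ** 2 - k(s) ** 2 - m ** 2) < 1e-12
```

The identity is exact for every s ≠ 0 because (s + 1/s)² − (s − 1/s)² = 4.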

Constants and orthogonality relations
Let f_1, . . . , f_2d be solutions of Eq. (2.2) for the same ǫ. Then, the Wronskian W defined below is a constant, i.e., W does not depend on x. In particular, W ≠ 0 means that f_1, . . . , f_2d are linearly independent; thus, they form a basis for the solution space of Eq. (2.2) for the given ǫ. Let w_i = w_i(x, s_i) be a solution of Eq. (2.2) with ǫ_i = ǫ(s_i), and assume that it is analytic with respect to s_i in some subset of C. Then, w_i(x, s_i)† is an analytic function of s_i*. Henceforth we denote the x-derivative of f by a subscript, f_x, and the s-derivative by a dot, ḟ. From Eq. (2.2), we soon find the relation (2.12), which represents the orthogonality of eigenstates between different eigenvalues. In particular, when ǫ_i* = ǫ_j, we obtain a constant J. The constancy of J is used to define the transmission and reflection coefficients (Subsec. 2.6). Differentiating Eq. (2.12) with respect to s_j and setting ǫ_i* = ǫ_j yields (2.14). From Eq. (2.12), we also obtain

Solutions for uniform systems
Let us consider the uniform case ∆(x) = m∆_± in Eq. (2.2). For given ǫ = ǫ(s) (s ≠ ±1), the linearly independent 2d eigenstates are given below. Each column of Φ satisfies Eq. (2.2); thus, the above provides 2d solutions. We also define the 2d × 2d matrix solution. M_± is both unitary and hermitian, M_± = M_±† = M_±^{−1}, and anticommutes with σ_3: {M_±, σ_3} = 0. By definition, the following relation holds. The constants W and J introduced in Subsec. 2.4 are calculated as The complex conjugation relation for the symmetric/antisymmetric cases (L = τL*τ†) is given by
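The dispersion of the uniform system can be cross-checked numerically. Below, the first-order Andreev-type form of the ZS operator is an assumption inferred from the context of Eq. (2.2) (that form did not survive extraction); for a plane wave, its Bloch matrix squares to (k² + m²)I, reproducing ǫ² = k² + m²:

```python
import numpy as np

# Sketch: for the uniform problem Delta(x) = m*Delta_minus with unitary
# Delta_minus, the Bloch matrix of the (assumed) Andreev-type ZS operator,
#     H(k) = [[ k*I_d,                m*Delta_minus ],
#             [ m*Delta_minus^dagger, -k*I_d        ]],
# squares to (k^2 + m^2)*I_{2d}, i.e., the eigenvalues obey eps^2 = k^2 + m^2.
rng = np.random.default_rng(1)
d, m, k = 3, 0.8, 1.7
Dm = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))[0]
H = np.block([[k * np.eye(d), m * Dm],
              [m * Dm.conj().T, -k * np.eye(d)]])
assert np.allclose(H @ H, (k ** 2 + m ** 2) * np.eye(2 * d))
```

The off-diagonal blocks of H² cancel precisely because ∆_− is unitary, which is the role played by the unitary-background assumption (2.3).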

Jost functions and scattering matrix
Henceforth, we consider non-uniform ∆(x) with the boundary condition Eq. (2.3). We define the right (+) and left (−) Jost functions F_±(x, s) as the solutions of Eq. (2.2) with the following asymptotic behavior. We also define the 2d × 2d Jost function by Y_±(x, s) := (F_±(x, s)∆_±†, sF_±(x, s^{−1})), which has the asymptotic form (2.24). Since a Jost function satisfying a given asymptotic form at x = +∞ or −∞ is uniquely fixed (due to the uniqueness of the solution of a linear differential equation for a given initial condition), Jost functions with the same asymptotic form must be identified as the same one. Therefore, from Eqs. (2.19) and (2.22), we obtain We introduce a 2d × 2d scattering matrix S(s) by Then, by definition, the asymptotic form of Y_+(x, s) is given by (2.27). Therefore, S(s) has the following form. The former relation suggests that T(s) and R(s) can be regarded as transmission and reflection coefficients. In fact, the asymptotic relation (2.28) is rewritten as Next, let us consider the complex conjugation relations of the scattering matrix S(s) when the symmetric or antisymmetric relation L = τL*τ† holds. We find Here we have used τ† = ±τ* [Eq. (2.6)]. This means that S(s) ∈ O(τ†σ_3), where O(σ) denotes a generalized orthogonal group whose elements X satisfy X^TσX = σ.

Bound states
In this subsection, we summarize the concept of bound states and zeros of the scattering matrix. If det A(s) vanishes at s = s_j, there exist r linearly independent vectors c_1, . . . , c_r such that A(s_j)c_i = 0, and hence we have r bound states. As we will prove later, s_j is a zero of det A(s) of order r, which means that the zeros of the matrix eigenvalues of A(s) are all simple. We want to prepare a matrix C_j of rank r including all r column vectors of the null space of A(s_j), i.e., satisfying A(s_j)C_j = 0. If such a C_j is found, the right Jost function (2.39) with s = s_j multiplied by C_j, i.e., F_+(x, s_j)∆_+†C_j, includes all r linearly independent bound states of eigenvalue ǫ(s_j) among its d columns. Such a C_j can be constructed by a contour integral, where δ > 0 is taken to be sufficiently small to exclude other discrete eigenvalues (see Fig. 2). Below, we explain that C_j defined by Eq. (2.48) has the desired property for the above-mentioned purpose. 1 By the singular-value decomposition (SVD) theorem, there exist s-independent unitary matrices U and V such that A(s) in the vicinity of s = s_j can be expressed as (2.49), where O_r is a zero matrix of size r, and Λ is a (d − r) × (d − r) diagonal matrix whose diagonal elements are real and positive by assumption. Λ̃ is an r × r diagonal matrix whose diagonal elements are real and non-negative, and in fact proved to be positive later (footnote 2). Therefore both Λ and Λ̃ are invertible. The asterisk represents some nonzero matrix not important in the current problem. The existence of the expression (2.49) is shown as follows. First, make an SVD of the zeroth order, i.e., of A(s_j). Next, using the arbitrariness of the choice of U and V originating from the zero matrix O_r, the top-left r × r submatrix of the first-order coefficient matrix can also be transformed into an SVD form, which is Λ̃. Thus we obtain (2.49).
If we assume the invertibility of Λ̃, which will be justified later in footnote 2, the expansion of A(s)^{−1} around s = s_j follows. Integrating this expression on the contour |s − s_j| = δ yields Eq. (2.51), which obviously satisfies A(s_j)C_j = C_jA(s_j) = 0. Since rank C_j = r, the column vectors in C_j span the null space of A(s_j); namely, if we write C_j = (c_j1, . . . , c_jd), then span{c_j1, . . . , c_jd} = Ker A(s_j).
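The contour-integral construction of C_j can be illustrated with a toy A(s) (the concrete matrix below is an assumption made only for the demonstration): integrating A(s)^{−1} around a simple zero picks out the pole residue, whose columns automatically span Ker A(s_j).

```python
import numpy as np

# Sketch of the construction of C_j: the contour integral of A(s)^{-1}
# on a small circle around a simple zero s_j equals the pole residue,
# a rank-r matrix whose columns span Ker A(s_j).
rng = np.random.default_rng(0)
sj = 0.3 + 0.8j
U = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))[0]
V = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))[0]

def A(s):
    # toy example: rank-deficient (r = 1) exactly at s = s_j
    return U @ np.diag([s - sj, 2.0]) @ V

delta, N = 0.1, 400                      # contour radius and quadrature points
C = np.zeros((2, 2), dtype=complex)
for t in 2 * np.pi * np.arange(N) / N:
    s = sj + delta * np.exp(1j * t)
    ds = 1j * delta * np.exp(1j * t) * (2 * np.pi / N)
    C += np.linalg.inv(A(s)) * ds
C /= 2j * np.pi

# the columns of C lie in (and span) the null space of A(s_j)
assert np.allclose(A(sj) @ C, 0, atol=1e-10)
assert np.linalg.matrix_rank(C, tol=1e-8) == 1
```

The radius δ plays exactly the role stated in the text: it must be small enough that no other discrete eigenvalue lies inside the contour, since any additional pole would contaminate the residue.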
Here we derive a few expressions necessary in the next subsection to calculate the normalization coefficients of bound states. Multiplying Eq. (2.34) by C_j† from the left and substituting s = s_j* yields (2.54), where B̃ is an r × r matrix and O_{d−r} is a zero matrix of size d − r. Using Eqs. (2.49) and (2.51), we obtain (2.55). Combining Eqs. (2.54) and (2.55) gives (2.56). Substituting s = s_j in this expression and multiplying by C_j from the right yields (2.57), where we define the rank-r d × r matrices H_j(x) and P_j; h_j1(x), . . . , h_jr(x) are orthonormalized bound states of eigenvalue ǫ(s_j). The factor −2is_jm is introduced just for later convenience. P_j is a coefficient matrix of normalized bound states. Using the arbitrariness of the definition of H_j up to a unitary transformation H_j → H_jU, we can always choose P_j such that P_j†P_j is diagonal, and such a P_j is already chosen in Eq. (2.60). With this choice, the coefficient vectors become orthogonal, p̂_ji†p̂_jl = δ_il, and the spectral decomposition of P_jP_j† is given by (2.61). Let us derive another expression for this integral. The asymptotic forms of the Jost function and its s-derivative (denoted by a dot) are given by Eqs. (2.62)-(2.64). Thus, from Eqs. (2.61) and (2.64), we obtain (2.65). Since P_j has rank r, there is a matrix X such that P_jX = I_r. Therefore, Eqs. (2.58) and (2.65) imply a relation that holds for both symmetric and antisymmetric cases. From this, we soon find that if p̂_ji is an eigenvector with eigenvalue c_ji², then ∆_−p̂_ji* is also an eigenvector with the same eigenvalue. If ∆(x) is symmetric, we can verify that p̂_ji and ∆_−p̂_ji* can be chosen to be the same real vector. On the other hand, if ∆(x) is antisymmetric, the two eigenvectors p̂_ji and ∆_−p̂_ji* always emerge in pairs; thus, the degeneracy of the eigenvalues of bound states is always even. 2 Here, we provide a proof that the diagonal elements of Λ̃ in Eq. (2.49) are all positive. As an example, let us assume Λ̃_11 = 0. We modify the definition of C_j in Eq. (2.51) by replacing the first diagonal element by 1.
Then, A(s_j)C_j = C_jA(s_j) = 0 and rank C_j = r still hold, and hence this modified C_j can also extract all r independent bound states. Equation (2.54) is also unchanged. Note that B̃ in Eq. (2.54) must have rank r, since the asymptotic forms of Eq. (2.62) at x = +∞ and x = −∞ must have the same rank (because there must exist the same number of linearly independent solutions for the given r linearly independent initial conditions). On the other hand, in Eq. (2.55), I_r should be replaced by 0 ⊕ I_{r−1} since Λ̃_11 = 0. Therefore, Eq. (2.56) does not hold, and we conclude that rank B̃ ≤ r − 1, a contradiction. Hence all diagonal elements of Λ̃ are positive.

Integral representation of Jost function
Now we introduce an integral representation for the left Jost functions using a kernel K(x, y), where K(x, y) is assumed to decrease exponentially as y → −∞. This representation is allowed for s such that the integrand does not diverge at y = −∞; such a region at least includes Im s ≤ 0 as a subset. Replacing s → s^{−1}, we obtain another representation, which is valid at least for Im s ≥ 0. Equations (2.70) and (2.71) are collectively written as (2.72). However, we should keep in mind that the right d columns and the left d columns in Eq. (2.72) have different regions where the integral representation is allowed. When s ∈ R, all columns allow the integral representation. The kernel K satisfies two important equations. To write them down, we write the matrix ZS operators for uniform and non-uniform ∆ as L^(0) and L, respectively. Here, the operation of L^(0) from the right is defined by fL^(0)(y) := i∂_y f σ_3 + f mM_−. The derivation is as follows. Note that Y_− and Ψ_− satisfy the corresponding eigenvalue equations. Multiplying Eq. (2.72) by ǫ and integrating by parts yields two expressions; equating them, which holds for all s ∈ R, yields (2.75) and (2.76).
When ∆(x) is symmetric or antisymmetric, the complex conjugation relations are given by The existence of the solution of the differential equation (2.76), which guarantees the use of the integral representation (2.70), is shown in Appendix A.
Equation (2.75) is used to express ∆ in terms of K. Equation (2.76) implies that the kernel K(x, y) must have the expression (2.78), where f(x, ǫ) is an eigenstate of L(x) with eigenvalue ǫ, φ(y, ǫ*) is an eigenstate of L^(0)(y) with eigenvalue ǫ*, and the summation over ǫ may include both discrete and continuous parts (i.e., an integral). The boundedness of K(x, y) with respect to x in fact suggests that the range of summation is restricted to ǫ ∈ R. The expression of K in the form (2.78) will be given in Subsec. 2.10.

Gelfand-Levitan-Marchenko equation
In this subsection, we derive the GLM equation, starting from the integral representation introduced above and Eq. (2.57). First, we evaluate the right-hand side. The completeness relation in the uniform system ∆(x) = m∆_− is given by (2.81). Using this, we define Ω(x, y) and consider the semicircular contour in the upper half plane (Fig. 2). The contribution from the arc vanishes in the infinite-radius limit, and we obtain the GLM equation, summarized as follows:

Ω(x, y) = Ω_sc(x, y) + Ω_bd(x, y). (2.88)

The subscripts "sc" and "bd" represent the contributions from the scattering and bound states, respectively. The equation is derived under the assumption y < x, while we need the case y = x when we calculate the potential [Eq. (2.77)]; it should then be interpreted as the limit y → x − 0. Henceforth, we relabel the discrete eigenvalues by distinguishing all multiple zeros, and the corresponding bound states are relabeled in the same manner. For example, if there is a triply degenerate s_1, a simple s_2, and a doubly degenerate s_3, we relabel them accordingly, where n is the total number of bound states and the p̂_i's satisfy the orthogonality relations above. We must carefully change the order of the ζ-integral and the y-integral when we substitute the above kernel into the integral representation of the Jost function [Eqs. (2.70) and (2.71)], since the integrand is not L¹-integrable. In order to apply the Fubini-Tonelli theorem, we must first consider ∫_{−∞+iδ}^{∞+iδ} dζ, and take the limit δ → +0 after changing the order of integration. In order to perform such calculations quickly, it is convenient to always add +i0 in Eq. (2.98). The change of variable ζ → ζ^{−1} yields another expression (2.102). Substituting s = s_j in Eq. (2.71), multiplying by ∆_−†c_jp̂_j from the right, and collecting the results into an array for j = 1, . . . , n yields (2.103). If K_sc(x, y) = 0, this is the same as the "GLM ansatz" in Ref. [25].
Let us define the following quantities, where, in the last expression, at least one of ζ or s^{−1} must carry +i0 to ensure the convergence at x = −∞. Then, the left Jost functions are rewritten as

Orthonormal basis of scattering eigenstates
The Jost functions Y_±(x, s) with s ∈ R, s ≠ ±1 generally form a linearly independent basis for a given eigenvalue ǫ(s). However, the columns of Y_±(x, s) are not generally orthogonal to each other. Here, we construct an orthonormal basis. Let us define Q(s) by Eq. (2.109). Since I_2d + S(s)†S(s) is a positive-definite hermitian matrix, its inverse square root is defined unambiguously; Q(s) = I_2d for a reflectionless potential. Q(s) itself does not have a closed form, but its square can be written as (2.110), whose hermiticity is checked by using Eq. (2.36). Q(s) satisfies the following properties. If L is sufficiently large, the plane-wave asymptotic form (2.28) can be used as the value at x = ±L.
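Multiplying a set of linearly independent columns by the inverse square root of their Gram matrix, as done here with (I_2d + S†S)^{−1/2}, is the symmetric (Löwdin) orthogonalization; a minimal sketch with a generic non-orthogonal column set (the specific matrix Y below is an illustrative assumption):

```python
import numpy as np

# Symmetric (Loewdin) orthogonalization: for linearly independent columns Y
# with positive-definite Gram matrix G = Y^dagger Y, the columns of
# Y @ G^{-1/2} are orthonormal.  G^{-1/2} is unambiguous since G > 0.
rng = np.random.default_rng(2)
n = 4
Y = rng.normal(size=(6, n)) + 1j * rng.normal(size=(6, n))
G = Y.conj().T @ Y                      # positive-definite hermitian Gram matrix
w, U = np.linalg.eigh(G)
G_inv_sqrt = U @ np.diag(w ** -0.5) @ U.conj().T
Q = Y @ G_inv_sqrt
assert np.allclose(Q.conj().T @ Q, np.eye(n))
```

Unlike Gram-Schmidt, this construction treats all columns symmetrically, which is why Q(s) reduces to the identity (no preferred column) in the reflectionless case.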
The expression for Ψ_± is given by Eq. (2.17). Using e^{ikxσ_3} = (cos kx)I_2d + (i sin kx)σ_3, rewriting, and using the formulae lim_{L→∞} e^{ikL} = 0 (as a distribution) and the distributional limit lim_{L→∞} sin(kL)/(πk) = δ(k), we obtain the orthonormality relation.

Completeness relation
Let us derive the completeness relation of the Jost functions. Using the completeness relation of the uniform system (2.81) and the integral representation (2.71), we obtain Using the equations introduced in Subsec. 2.10 and the relation where we omit +i0 and write s * = s since there is no convergence problem for this integral. Thus we get

Gap equation
This section includes the main result of this paper. We introduce the matrix BdG and gap equations, and solve them.

Gap equation in infinite-length system with finite reflection
The BdG and the gap equations considered in Ref. [25] are as follows. The BdG equation is (3.1), where ∆(x) is a d × d matrix and u, v represent d-component vectors. This is the same as the matrix ZS problem (2.2). As a self-consistent condition, three kinds of gap equations are considered, namely, the gap equations for symmetric and antisymmetric ∆'s corresponding to p-wave and s-wave superfluids,
and the one for non-symmetric ∆ corresponding to s-p mixed superfluids. Here, ν_j ∈ [0, 1] denotes the filling rate of the eigenstate j, and g > 0 is a coupling constant. We use bold font (u, v) for quasiparticle eigenfunctions to avoid confusion with the filling ν. These equations appear when we consider a self-consistent solution in the BdG theory for unconventional and/or multicomponent fermionic condensates possessing an SU(d)-symmetric two-body interaction. The s-p mixed superfluid is realized by fine-tuning of coupling constants [25]. Note that Eq. (3.1) represents the BdG equation for right-mover eigenstates obtained by the Andreev approximation at k = k_F. We also obtain left-mover ones linearized at k = −k_F, but they need not be taken into account after elimination of the double counting of the BdG eigenstates. (See Secs. IV and V of Ref. [25] for the Andreev approximation and the double-counting problem.) Since we are interested in an excited state near the ground state, we consider the following occupation: ν_j = 1 for negative-energy scattering states, ν_j = 0 for positive-energy scattering states, and ν_j adjusted to solve the gap equation for bound states. (3.4)
Let us define Ξ as follows; then the gap equation is equivalent to (3.8), where τ = 0, σ_1, and σ_2 for non-symmetric, symmetric, and antisymmetric ∆, respectively. We assume that ∆ has n bound states, write them as H = (h_1, . . . , h_n), and use the same notation for the scattering states as in the previous section. The discrete sum over scattering states in Ξ_eigenstates should be replaced by a continuous integral in the infinite-length limit. The orthonormal basis and its completeness relation derived in Subsecs. 2.11 and 2.12 imply that the infinite-length limit is given by the expression below, where D_ij = δ_ij(2ν_j − 1) and s_c is a cutoff related to the momentum cutoff k_c. The cutoff k_c is determined by some other physical mechanism, e.g., the Debye frequency in condensed matter. First, let us consider the uniform condensate ∆(x) = m∆_−, where the scattering states are given by F_0(x, s) = (s∆_−, I_d)^T e^{ik(s)x} and no bound state exists. The coupling constant g, the cutoff k_c, and the bulk gap m are then related as follows. If we eliminate g using this relation, we can take the limit s_c → +∞, since the logarithmically divergent terms in Ξ_gap and Ξ_eigenstates cancel out. The resultant expression is (3.12). Note that the formal replacement (3.13) turns this into (3.14). Thus, the gap equation that we must solve is [σ_3, Ξ + τΞ*τ] = 0 with this Ξ.
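The logarithmic cutoff dependence that cancels between Ξ_gap and Ξ_eigenstates comes from the BCS-type integral ∫₀^{k_c} dk/√(k² + m²) = arcsinh(k_c/m) ≈ ln(2k_c/m). The precise prefactors of the elided relation between g, k_c, and m are not reproduced here; the sketch below only checks this divergence structure:

```python
import numpy as np

# Check the logarithmic cutoff dependence of the uniform gap integral:
#   int_0^{k_c} dk / sqrt(k^2 + m^2) = arcsinh(k_c/m) ~ log(2*k_c/m),
# so the log(k_c) terms can cancel between the two Xi contributions.
m = 0.5
for kc in [10.0, 100.0, 1000.0]:
    k = np.linspace(0.0, kc, 200001)
    f = 1.0 / np.sqrt(k ** 2 + m ** 2)
    h = k[1] - k[0]
    integral = h * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoidal rule
    assert abs(integral - np.arcsinh(kc / m)) < 1e-6
    # leading large-cutoff behavior: pure logarithm, corrections O((m/kc)^2)
    assert abs(integral - np.log(2.0 * kc / m)) < (m / kc) ** 2
```

A manual trapezoidal sum is used instead of `np.trapz`, which was renamed in NumPy 2.0.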

Proof of reflectionless nature
Let us solve the gap equation, Eq. (3.8), with (3.14) and (3.13). The proof consists of the following three steps (I)-(III). (I) Rewrite Ξ as Ξ = Ξ_bound + Ξ_scattering with (3.15) and (3.16). Step (I): this step is nothing but a straightforward calculation, but since it is a little complicated, we show several key mathematical expressions. Let us write e_j(x) = c_j e^{−ik(s_j)x}, j = 1, . . . , n.
For brevity, we henceforth abbreviate ∫_{−∞−i0}^{∞−i0} dζ as ∫dζ. Then, the expressions of the scattering and bound states are given by Eqs. (3.22) and (3.23). Here and hereafter, s and s* in Eqs. (3.22) and (3.23) should be interpreted as carrying an infinitesimal −i0 and +i0, respectively, whenever we encounter an integral that does not converge for purely real s; otherwise the difference is ignorable. We also need an expression for the kernel K(x, x). Setting x = y in Eq. (2.120), we obtain K(x, x)† = K(x, x), i.e., it is self-adjoint. Using the self-adjointness, two expressions are possible:

Henceforth, we calculate FF†, which is part of the integrand of Eq. (3.14). For brevity of description, we split F in Eqs. (3.22) and (3.23) into three parts, F = f_1 + f_2 + f_3, and denote the corresponding parts of F† by f_1′, f_2′, f_3′. The term f_1f_1′ vanishes if we take a commutator with σ_3. (Even without a commutator, its integration vanishes if a principal value is taken.) Next, we consider the term f_2f_2′. Using the partial fraction decomposition and Eqs. (3.24) and (3.25), we obtain (3.34); f_22a and f_22d provide a quantity relating to the filling rates of bound states (3.35). The remaining cross terms are evaluated using Eqs. (3.28) and (3.29), then Eqs. (3.22) and (3.27), and, in the same way, Eq. (3.23) with s = ζ^{−1} together with Eq. (3.26). Finally, we obtain (3.39).
Summarizing the above, let us write down FF† = Σ_{i,j=1}^{3} f_i f_j′. The double-integral terms in Eqs. (3.37)-(3.39) cancel out. We also recall that K = K_sc + K_bd = K_sc† + K_bd†. Thus we arrive at (3.40), where R and R̃ are eliminated by using their definition, Eq. (3.21). Let us integrate this expression over s. Recalling that ζ has an infinitesimal −i0, we write ζ = ζ_R − i0. The Sokhotski-Plemelj formula says

1/(ζ_R − i0) = p.v. (1/ζ_R) + iπδ(ζ_R), (3.41)

where p.v. denotes Cauchy's principal value. Using this, we obtain an expression in which −i0 can be deleted (hence ζ = ζ* = ζ_R), since the integral converges without this factor. Thus, Ξ is finally written in a form equivalent to Eqs. (3.15) and (3.16). We note that, in the previous work (Ref. [35]), the delta function in Eq. (3.41) was not taken into account, and the final expression had an additional term, sgn s or H(−s) (Eq. (21) or (23) of Ref. [35]); if we correctly use Eq. (3.41), no such term arises. In the antisymmetric case, n is always even, and pairs of bound states (see Subsec. 2.7.3) are labeled as h_{2j−1} and h_{2j} (hence s_{2j−1} = s_{2j}, θ_{2j−1} = θ_{2j}); σ_y is an ordinary 2 × 2 Pauli matrix. From this, we find (3.49) with

Ñ := diag(ν_2, ν_1, ν_4, ν_3, . . . , ν_n, ν_{n−1}), (3.50)

i.e., Ñ is obtained by exchanging ν_{2j−1} and ν_{2j} in N.
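The Sokhotski-Plemelj step above, whose omission caused the spurious sgn s term in Ref. [35], can be checked numerically with a test function (the Gaussian below is an arbitrary choice):

```python
import numpy as np

# Numeric illustration of the Sokhotski-Plemelj formula:
#   1/(x - i*0) = p.v. 1/x + i*pi*delta(x).
# For the even test function f(x) = exp(-x^2), the principal-value part
# vanishes by symmetry, so int f(x)/(x - i*eps) dx -> i*pi*f(0) = i*pi.
eps = 1e-3
x = np.linspace(-10.0, 10.0, 2_000_001)   # grid fine enough to resolve eps
h = x[1] - x[0]
g = np.exp(-x ** 2) / (x - 1j * eps)
integral = h * (g.sum() - 0.5 * (g[0] + g[-1]))   # trapezoidal rule
assert abs(integral.real) < 1e-6            # p.v. part: zero by symmetry
assert abs(integral.imag - np.pi) < 1e-2    # delta part: pi * f(0)
```

Dropping the iπδ(ζ_R) term here would change the imaginary part by an O(1) amount, which is precisely the kind of discrepancy discussed in the comparison with Ref. [35].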
Thus, for the non-symmetric, symmetric, and antisymmetric cases, the gap equation takes the following form, where Ξ_scattering is the same as above and Ξ̃_bound is given below. Evaluating the remaining scattering-state integral shows that the gap equation can be satisfied only if the reflection coefficient vanishes. Thus, the self-consistent potential is reflectionless.

Asymptotics of n-soliton solutions
In this section, we consider reflectionless potentials, i.e., n-soliton solutions, in detail. We derive asymptotic formulae when solitons are sufficiently separated from each other. Although we do not need concrete expressions and asymptotic formulae for multi-soliton solutions when we solve the gap equation in Sec. 3, the formulae shown in this section are helpful to grasp the physical picture of multi-soliton states.

Reflectionless solution
Let us solve the GLM equation for the reflectionless case, R(s) = 0 for real s. We assume that there are n bound states, whose eigenvalues are given by ǫ(s_1), . . . , ǫ(s_n), s_i ∈ H. For convenience, we write As we will see below, the position of the j-th soliton, X_j, is expressed as X_j = x_j + (additive constant arising from interaction between solitons). The kernel K and the bound states H are given below; F(x, s)e^{−iǫ(s)t} corresponds to F(x, t, s) of Ref. [25]. The gap function ∆ is given by Eq. (2.77), and, in the current case,

∆(x) = m∆_− − 2i(e_1(x)p̂_1, . . . , e_n(x)p̂_n)(I_n + G(x))^{−1}

One-soliton solution
For later convenience, we first introduce notation for the most basic one-soliton solution in the one-component system. We write s_i = e^{iθ_i}, and hence κ_i = m sin θ_i. Then, let us define (4.9). Using the above notation, if ∆(x) is d × d, the one-soliton solution located at x = x_1 is as follows.
The bound state and ∆(x) are given below. If we introduce a unitary matrix D_1 such that p̂_1 = D_1(1, 0, . . . , 0)^T, the structure of the one-soliton solution becomes very simple; that is, if we choose a basis in which ∆ is diagonal, one of the diagonal elements is given by the one-soliton solution of the one-component system, and the others are simply constants. However, as the number of solitons increases, the arbitrariness of the choice of p̂_1, p̂_2, . . . , p̂_n gives the solitons various "angles", and the treatment becomes less simple than in the one-component problem. Nevertheless, if the solitons are well separated, each soliton can be approximated by the one-soliton expression (4.11). In the next subsection, we derive such formulae.
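Equation (4.9) did not survive the extraction; as an assumed standard form of the basic one-component soliton, consistent with κ_i = m sin θ_i, one may take the twisted kink ∆_basic(x) = m(cos θ + i sin θ tanh(κx)). A quick check of its asymptotics and amplitude:

```python
import numpy as np

# Assumed one-component twisted-kink profile (an assumption of this sketch,
# consistent with kappa = m*sin(theta); not copied from the lost Eq. (4.9)):
#   Delta_basic(x) = m * (cos(theta) + 1j*sin(theta)*tanh(kappa*x))
m, theta = 1.0, 0.8
kappa = m * np.sin(theta)

def delta_basic(x):
    return m * (np.cos(theta) + 1j * np.sin(theta) * np.tanh(kappa * x))

# boundary values m*e^{+/- i*theta}: a phase kink of total twist 2*theta
assert abs(delta_basic(+50.0) - m * np.exp(+1j * theta)) < 1e-12
assert abs(delta_basic(-50.0) - m * np.exp(-1j * theta)) < 1e-12
# the amplitude dips to m*cos(theta) at the soliton center
assert abs(abs(delta_basic(0.0)) - m * np.cos(theta)) < 1e-12
```

In the matrix case described above, this profile occupies one diagonal entry in the basis diagonalizing ∆, while the remaining entries stay at their constant background values.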

Approximate formulae for isolated solitons
In this subsection, we provide approximate expressions for multi-soliton solutions when the solitons are well isolated. The aim is the determination of the ∆̄_j's and ∆_j(x)'s in Fig. 3. Let us assume that the solitons are sufficiently separated from each other, with x_1 ≪ x_2 ≪ · · · ≪ x_n, and consider an approximate expression for the j-th soliton near x ≃ x_j. Let us examine how Eq. (4.5) simplifies if we focus only on the vicinity of the j-th soliton, x ≃ x_j. The bound state h_k decreases rapidly far from the k-th soliton; thus we can set H ≃ (0, . . . , h_j, 0, . . . , 0). As for W, e_1, . . . , e_j should be taken into account since they grow exponentially, while the remaining e_{k>j}(x) are ignorable. In the last term HG, though the h_{k<j}(x) are exponentially small, the product of h_k and e_k gives an O(1) term; thus they should be kept. Summarizing, Eq. (4.5) is approximated by (4.13), where H_j and W_j are (2d) × j matrices made of the first j columns of H and W, G_j is the top-left j × j submatrix of G, and ξ̂_j = (0, . . . , 0, 1)^T is a j-component unit column vector. The kernel K in this approximation is given by K(x, y) = H_j(x)W_j(y)†, and ∆ is given by ∆(x) = m∆_− + 2iK_12(x, x). For brevity, we introduce a few more notations: S_j := diag(s_1, . . . , s_j), E_j(x) := diag(e_1(x), . . . , e_j(x)), and Q_j as in (4.14). Q_j is positive definite since G_j is so. Let Q̃_j be the (j − 1) × j matrix obtained by deleting the j-th row of Q_j. Then, solving Eq. (4.13), we obtain the following formulae. The approximate expressions for the j-th bound state h_j(x) and the j-th soliton ∆_j(x) are given below, where X_j := x_j + y_j, and the position shift y_j, the coefficient vectors q_j, r_j, and the unitary matrices ∆̄_j are defined accordingly. Note that q_j and r_j are normalized: q_j†q_j = r_j†r_j = 1. We define y_1 = 0, q_1 = p̂_1, r_1 = s_1∆_−†p̂_1, and ∆̄_0 = ∆_−. It also follows that ∆̄_n = ∆_+.
The position shift y_j, which arises from soliton scattering, is always non-negative, y_j ≥ 0, since e^{−2κ_j y_j} ≤ 1. (This follows from the facts that the Q_j's are positive definite, that 1/sin θ_j = [Q_j]_{jj}, and from the general inequality det H ≤ det H_11 det H_22, where H is positive-definite hermitian and the H_ii are its square diagonal blocks.) q_j, r_j, ∆̄_{j−1}, and ∆̄_j are related by

∆̄_j r_j = s_j^{−1} q_j, ∆̄_{j−1} r_j = s_j q_j. (4.20)

m∆̄_j represents the constant value of the potential between the j-th and (j + 1)-th solitons, ∆_j(x) and ∆_{j+1}(x). We can obtain a few more recurrence relations for the ∆̄_j's other than Eq. (4.21). If we define the quantities below, then Eq. (4.21) is equivalent to a recurrence for ∆̄_j; repeatedly using it, we obtain Eq. (4.24) (recall that ∆̄_0 = ∆_−). Here, we sketch how to derive the above formulae (4.15)-(4.24). We only show three key linear-algebraic formulae and omit the lengthy calculation. First, (4.25), which is one of the Sherman-Morrison-Woodbury-type formulae. Next, (4.26), which is a slight rewriting of Eq. (4.14), but very convenient. Finally, let A be an invertible n × n matrix, and B the (n − 1) × (n − 1) matrix obtained by deleting the n-th column and row of A. Let C be a cofactor matrix of A satisfying AC = CA = (det A)I_n. (Note: the "cofactor matrix" may be denoted C^T in other references.) From Jacobi's formula 3 [40], we obtain the required determinant identities. We note that C_nn = det B by definition. Using this formula, we can prove the relations above.
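The determinant inequality invoked for y_j ≥ 0 is Fischer's inequality for positive-definite hermitian matrices; a quick numeric spot-check (the sizes n and r below are arbitrary choices):

```python
import numpy as np

# Fischer's inequality: det H <= det(H11) * det(H22) for positive-definite
# hermitian H partitioned into square diagonal blocks H11 (r x r) and
# H22 ((n-r) x (n-r)) -- the inequality used above to bound e^{-2*kappa_j*y_j}.
rng = np.random.default_rng(3)
n, r = 5, 2
for _ in range(100):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    H = A.conj().T @ A + 0.1 * np.eye(n)     # positive definite by construction
    lhs = np.linalg.det(H).real
    rhs = (np.linalg.det(H[:r, :r]) * np.linalg.det(H[r:, r:])).real
    assert lhs <= rhs * (1 + 1e-10)          # tolerance for round-off
```

Equality holds only when the off-diagonal blocks vanish, which is why the position shift vanishes precisely in the non-interacting (infinitely separated) limit.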

The examples of usage of Eqs. (4.25)-(4.28) are as follows. For example, when we solve Eq. (4.13) by Cramer's rule, we need det(G_j + ξ̂_jξ̂_j†). This is rewritten as det(G_j + ξ̂_jξ̂_j†) = (1 + ξ̂_j†G_j^{−1}ξ̂_j) det G_j = det G_j + det G_{j−1} ∝ e_j^2 m det Q_j + det Q_{j−1}, which provides the denominator of f_basic and ∆_basic. As another example, the fact that q_j is normalized is shown as follows. Let C denote a cofactor matrix of Q_j. Then, q_j†q_j = (1/(sin θ_j det Q_{j−1} det Q_j)) Σ_{k,l} C_{jk}C_{lj} p̂_k†p̂_l, which can be evaluated with Eqs. (4.27) and (4.28).

Relation between stationary potentials and solitons of the matrix NLS equation

In Ref. [25], it is said that the stationary-class potentials are the same as the "snapshots" at every time t of the soliton solutions of the matrix NLS equation. Here we demonstrate it. Instead of the bare equation i∆_t = −∆_xx + 2∆∆†∆, via the gauge transformation ∆ → ∆e^{−2im²t}, m > 0, we discuss the gauged equation

i∆_t = −∆_xx + 2(∆∆† − m²)∆, (5.1)

which has the finite-density uniform solution ∆(t, x) = m∆_−, where ∆_− is a unitary matrix. The Lax pair for Eq. (5.1) is given with the isospectral condition ǫ_t = 0. Rewriting this to the equivalent AKNS system is straightforward. Let us find the time dependence of the scattering data A(s) and B(s) of the Jost function. Let us assume that ∆(t, x) obeys the matrix NLS equation and satisfies the boundary condition ∆(t, ±∞) = m∆_±. We write ∆(0, x) = ∆(x). Let Ỹ_+(t, x, s) be a solution of Eqs. (5.4) and (5.5), with ǫ = ǫ(s) and the initial condition w(t = 0, x) = Y_+(x, s). Though B is time-dependent near the origin, its asymptotic form at spatial infinities does not depend on time, so the asymptotic form of Ỹ_+ can be easily obtained. We can soon check −B_±Ψ_±(x, s) = 2ǫ(s)k(s)Ψ_±(x, s)σ_3, (5.7) and therefore the solution of Eq. (5.5) at x = ±∞ is given by
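As a small cross-check of the bare matrix NLS equation quoted above, its scalar reduction i∆_t = −∆_xx + 2|∆|²∆ admits the plane-wave solution ∆ = a e^{i(qx−ωt)} with ω = q² + 2a² (this dispersion is derived here, not taken from the paper); the residual can be verified by finite differences:

```python
import numpy as np

# Hedged numeric check of the bare NLS (scalar reduction of the matrix case),
#   i*Delta_t = -Delta_xx + 2*|Delta|^2*Delta,
# via its plane-wave solution Delta = a*exp(i*(q*x - w*t)), w = q^2 + 2*a^2.
a, q = 0.7, 1.3
w = q ** 2 + 2 * a ** 2

def Delta(t, x):
    return a * np.exp(1j * (q * x - w * t))

h = 1e-4                   # central-difference step
t0, x0 = 0.4, -0.9
d_t = (Delta(t0 + h, x0) - Delta(t0 - h, x0)) / (2 * h)
d_xx = (Delta(t0, x0 + h) - 2 * Delta(t0, x0) + Delta(t0, x0 - h)) / h ** 2
residual = 1j * d_t + d_xx - 2 * abs(Delta(t0, x0)) ** 2 * Delta(t0, x0)
assert abs(residual) < 1e-5
```

The gauge factor e^{−2im²t} simply shifts this dispersion by −2m², which is how the gauged equation (5.1) acquires the constant finite-density solution m∆_−.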