Finite-gap CMV matrices: Periodic coordinates and a Magic Formula

We prove a bijective unitary correspondence between 1) the isospectral torus of almost-periodic, absolutely continuous CMV matrices having fixed finite-gap spectrum and 2) special periodic block-CMV matrices satisfying a Magic Formula. The latter class arises as spectrally-dependent operator Möbius transforms of certain generating CMV matrices which are periodic up to a rotational phase; for this reason we call them "MCMV". Such matrices are related to a choice of orthogonal rational functions on the unit circle, and their correspondence to the isospectral torus follows from a functional model analogous to that of GMP matrices. As a corollary of our construction we resolve a conjecture of Simon; namely, that Caratheodory functions associated to such CMV matrices arise as quadratic irrationalities.


Introduction and main results
This paper studies two equivalent notions of interest, one in the spectral theory of certain unitary operators and the other in the theory of analytic functions mapping the unit disk into its closure. This connection between CMV matrices and Schur functions is no mystery to experts in either field. We introduce here our results from both perspectives because we find them striking and attractive, as well as not immediately equivalent.
We begin in the context of the title:
We will study those CMV matrices with almost-periodic coefficients {a_k}_{k∈Z} having full a.c. spectrum consisting of a fixed finite union of non-degenerate closed circular arcs E ⊂ ∂D. We denote the set of all such CMV matrices by T_CMV(E). Topologically, T_CMV(E) is a torus of dimension equal to the number of disjoint arcs in E. CMV matrices are natural objects of interest in the context of orthogonal polynomials on the unit circle (see [29,30]). This in part relies on the interesting fact that half-line CMV matrices C_+, formed by setting a_{−1} = −1 above and restricting to ℓ²(N), are universal for cyclic unitary operators in the sense that, for any probability measure ν with infinite support on the unit circle ∂D, multiplication by the independent variable in L²(dν) is unitarily equivalent to some half-line CMV matrix C_+. This discovery was made surprisingly recently by Cantero, Moral, and Velázquez [7] by considering the basis of L²(dν) generated by orthonormalizing {1, z^{−1}, z, z^{−2}, z², . . .}. In comparison, it has long been known (see, e.g., [34]) that tridiagonal Jacobi matrices are universal models for self-adjoint operators with a cyclic vector. CMV matrices are also important, e.g., in the theory of random matrices and integrable systems [15,22], and for quantum walks [6]; see also [4,16].
The CMV basis is far from the only generating set for L²(dν). Letting b_w be the elementary Blaschke factor for D vanishing at w, i.e.,

b_w(z) := (z − w)/(1 − \overline{w}z),    (1.1)

and denoting by w^* = 1/\overline{w} reflection with respect to ∂D, one has that \overline{b_w(z^*)} = b_w(z)^{−1} = c_w b_{w^*}(z) for some explicit unimodular constant c_w. We can then suggestively rewrite the CMV basis above as instead being generated by orthonormalizing the sequence {1, b_∞, b_0, b_∞², b_0², . . .}. For a fixed sequence of points {z_k}_{k∈N} ∈ D^N with modulus bounded uniformly away from 1, denoting by {B_k} and {B_k^*} the associated families of Blaschke products, one could just as well have spanned L²(dν) by the sequence {B_0, B_1^*, B_1, B_2^*, B_2, . . .}. In [35], Velázquez showed that the structure of multiplication by the independent variable z in L²(dν) in the orthonormalization of this new generating set is related to CMV matrices via an operator Möbius transform; specifically, denoting by D_+ := diag_N{0, z_1, z_1, z_2, z_2, . . .}, he showed that multiplication by z in L²(dν) is unitarily equivalent to an operator Möbius transform (with data D_+) of some half-line CMV matrix C_+. This theorem suggests we should study this new class of operator Möbius transforms of CMV matrices more closely:

Definition 1.1 (MCMV matrices). Fix n ≥ 1 and let \vec z = {z_k}_{k=0}^{n−1} ∈ D^n with z_0 = 0, {a_k}_{k∈Z} ∈ D^Z, and ϑ ∈ R/2πZ. Denote by D_0 the 2n-periodic diagonal matrix

D_0 := D_0(\vec z) = diag_Z{. . . , z_{n−2}, z_{n−1}, z_{n−1}, z_0 | z_0, z_1, z_1, z_2, . . .},    (1.4)

let Λ_k(ϑ) be the 2n × 2n diagonal matrix Λ_k(ϑ) := diag_{2n}{e^{ikϑ}, e^{−ikϑ}, . . . , e^{ikϑ}, e^{−ikϑ}}, and define Λ(ϑ) := ⊕_{k∈Z} Λ_k(ϑ).    (1.5)

With this notation, the (whole-line) MCMV matrix for \vec z, {a_k}, and ϑ is defined by (1.6), where C = C({a_k}) is the CMV matrix associated to {a_k}_{k∈Z}.
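The elementary identities for b_w are easy to check numerically. The following sketch (our own illustration; the explicit constant c_w = \overline{w}/w is our computation in the normalization (1.1)) verifies that b_w vanishes at w, is unimodular on ∂D, and satisfies the reflection identity.

```python
import cmath

def b(w, z):
    """Elementary Blaschke factor for the disk, vanishing at w (cf. (1.1))."""
    return (z - w) / (1 - w.conjugate() * z)

w = 0.4 - 0.3j
z_disk = 0.2 + 0.5j
z_circ = cmath.exp(1j * 0.7)

# b_w vanishes at w and is unimodular on the unit circle.
assert abs(b(w, w)) < 1e-12
assert abs(abs(b(w, z_circ)) - 1.0) < 1e-12

# Reflection identity: with w* = 1/conj(w) and c_w = conj(w)/w (unimodular),
# 1/b_w(z) = c_w * b_{w*}(z); equivalently conj(b_w(1/conj(z))) = 1/b_w(z).
w_star = 1 / w.conjugate()
c_w = w.conjugate() / w
lhs = 1 / b(w, z_disk)
assert abs(lhs - c_w * b(w_star, z_disk)) < 1e-12
assert abs(b(w, 1 / z_disk.conjugate()).conjugate() - lhs) < 1e-12
```

The same computation shows why the constant c_w drops out of any modulus considerations: it is unimodular.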
We shall explain the role of Λ(ϑ) momentarily and be even more specific in Section 3. In short, this diagonal matrix enables us to change from periodicity up to a rotational phase to pure periodicity.
Like CMV matrices, an MCMV matrix A is again band-structured. If we split A into 2n × 2n blocks A_{ij}, then A_{ij} = 0 if |i − j| > 1. Moreover, the off-diagonal blocks are of the form A_{i,i−1} = v_i δ_{2n−1}^⊤ and A_{i,i+1} = u_i δ_0^⊤ for some explicit vectors u_i, v_i ∈ C^{2n}; cf. Lemma 4.7 and the figure in (1.7). Furthermore, since operator Möbius transforms preserve unitarity, MCMV matrices are likewise unitary operators. Thus MCMV matrices can be viewed as being "block-CMV". This special structure does not hold for arbitrary operator Möbius transforms of CMV matrices; it follows in our case from D_0 having periodically repeated zero entries. We denote the class of all MCMV matrices associated to \vec z ∈ D^n by

A(\vec z) := {A({a_k}, ϑ; \vec z) : {a_k} ∈ D^Z, ϑ ∈ R/2πZ}    (1.8)

and give special consideration to the subset A_per(\vec z) ⊂ A(\vec z) of periodic operators, i.e.
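The band-plus-unitary structure is easiest to see in the classical case z_k ≡ 0, where the CMV matrix factors into two block-diagonal unitaries (the LM factorization recalled in Section 3.2). The following sketch, with arbitrary sample coefficients and 1 × 1 identity boundary blocks on a finite window (our own truncation, not the paper's whole-line operator), checks that the product is unitary and five-diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)

def theta(a):
    """2x2 unitary building block for |a| < 1, with rho = sqrt(1 - |a|^2)."""
    rho = np.sqrt(1 - abs(a) ** 2)
    return np.array([[np.conj(a), rho], [rho, -a]])

# Sample Verblunsky-type coefficients in D (hypothetical values).
a = rng.uniform(-0.6, 0.6, 7) + 1j * rng.uniform(-0.6, 0.6, 7)

N = 8
L = np.zeros((N, N), dtype=complex)
M = np.zeros((N, N), dtype=complex)
M[0, 0] = 1.0                       # boundary blocks of this finite truncation
M[N - 1, N - 1] = 1.0
for k in range(0, N, 2):            # Theta(a_0), Theta(a_2), ... along L
    L[k:k + 2, k:k + 2] = theta(a[k])
for k in range(1, N - 1, 2):        # Theta(a_1), Theta(a_3), ... offset by one in M
    M[k:k + 2, k:k + 2] = theta(a[k])

C = L @ M
assert np.allclose(C @ C.conj().T, np.eye(N))      # C is unitary
for i in range(N):
    for j in range(N):
        if abs(i - j) > 2:
            assert abs(C[i, j]) < 1e-14            # five-diagonal band
```

The band bound is automatic: each factor is block-diagonal with 2 × 2 blocks offset by one index, so the product has bandwidth at most two.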
A_per(\vec z) := {A ∈ A(\vec z) : S^{2n}A = AS^{2n}},    (1.9)

where, as usual, S is the right shift operator. Notice that the usual CMV matrices are simply the special case where z_k = 0 for all k. This realization gives rise to a natural (if somewhat ill-posed) question: is there a "best" generating set of Blaschke products for a given measure ν on ∂D? In the context of whole-line CMV matrices C ∈ T_CMV(E), we offer an affirmative answer:

Theorem 1.2 (Periodic Coordinates for finite-gap CMV matrices). Let E ⊂ ∂D be a disjoint union of g + 1 non-degenerate closed circular arcs. There exists a sequence \vec z_E := {z_k}_{k=0}^{g} ∈ D^{g+1} of points depending only on E such that, denoting by T_MCMV(E) the corresponding class of periodic MCMV matrices, there is a unitary bijection between T_CMV(E) and T_MCMV(E); i.e.
T_CMV(E) ≃ T_MCMV(E).    (1.11)

In particular, for an almost-periodic CMV matrix C with purely absolutely continuous spectrum E, there exists an associated CMV matrix C = C({a_k}) with phase-periodic coefficients

a_{k+2(g+1)} = e^{−2iϑ}a_k,  k ∈ Z,    (1.12)

such that C is unitarily equivalent to the periodic MCMV matrix A({a_k}, ϑ; \vec z_E) ∈ A_per(\vec z_E) and the spectral measures of the one-sided restrictions C_+ and A_+ coincide; cf. (4.5).
Remarks. (i) The above theorem shows that a periodic MCMV matrix is naturally related to two different CMV matrices: the almost-periodic CMV matrix in T CMV (E) and the "generating" phase-periodic CMV matrix. Throughout, we will denote the former by C with parameters {a k } (resp. {ρ k }) and the latter by C with parameters {a k } (resp. {ρ k }).
(ii) As a consequence of (1.12), the operator b_{−D_0}(C) is periodic up to a phase. By conjugating it with Λ(ϑ) (and this is the main purpose of introducing such a diagonal matrix), we get that A({a_k}, ϑ; \vec z_E) becomes periodic in the standard sense. This is particularly important in view of Theorem 1.4 below, since by Naiman's lemma [21] an operator satisfying the right-hand side of (1.20) is necessarily periodic.
To summarize, we prove that for CMV matrices in T CMV (E), there is a "best" basis of the L 2 -space related to its half-line truncation C + in the sense that the associated MCMV matrices are periodic. In fact, we can completely characterize this basis via the vector z E , as well as the MCMV matrices A in T MCMV (E), in terms of a "Magic Formula" analogous to that of Damanik, Killip, and Simon [8]. Thus MCMV matrices are a unitary analog of GMP matrices [9,37]. To formulate our result, we first need to introduce a certain rational function called the discriminant.
Fix a finite-gap set E ⊂ ∂D, that is, a finite disjoint union of non-degenerate closed circular arcs. Let us refer to the arc-components as bands and to the connected components of ∂D \ E as gaps. We assume that the number of gaps (and respectively bands) is g + 1. For any point z_0 in the domain C \ E, there exists an Ahlfors function w_{z_0} which maximizes the modulus of the derivative at z_0 (or, in the case z_0 = ∞, maximizes lim_{z→∞} |z w_∞(z)|) among all analytic functions on C \ E with modulus bounded by 1; cf. [1,12]. This extremal property defines w_{z_0} uniquely up to a unimodular multiplier. In the right normalization, these Ahlfors functions for E ⊂ ∂D have the symmetry property (1.13); in particular, the zeros of w_∞ can be obtained by reflecting the zeros of w_0 with respect to ∂D. In terms of these functions, we can define a special function, which we call the generalized discriminant, related to the set E: for a finite union of non-degenerate closed circular arcs E ⊂ ∂D, the generalized discriminant ∆_E is defined by (1.14). By (1.13), we see that ∆_E is real-valued on ∂D; since |w_{z_0}(z)| = 1 for z ∈ E in the sense of nontangential limits and |w_{z_0}(z)| < 1 for z ∈ C \ E, it follows that ∆_E takes values in [−2, 2] on E. The function ∆_E has 2(g + 1) poles, half of which lie inside the unit disk. Moreover, there is exactly one critical point (i.e., a zero of ∆'_E) in each band of E and each gap of E. While ∆_E maps all critical points in bands to −2, the critical points in gaps have ∆_E-value strictly greater than 2. For more details on the Ahlfors function and the discriminant, we refer to Appendix A (where in particular these properties are proven). As will be crucial for our analysis, we define \vec z_E to be some fixed ordering of the poles of ∆_E inside D, i.e.
(1.16)

At the same time, there is a natural way of associating a rational function to an MCMV matrix: one first defines the monodromy matrix T_A (see (1.18)). Denoting by j the signature matrix

j = ( 1  0 ; 0  −1 ),

we note that T_A(z)^* j T_A(z) ≤ j for z ∈ D, while T_A(z)^* j T_A(z) = j when z ∈ ∂D. Functions of that type are called j-inner matrix functions, and equation (1.18) represents a factorization of T_A into so-called elementary Blaschke-Potapov factors of the first kind. The study of general j-contractive matrix functions and their multiplicative structure goes back to Potapov [27]. Now, let B(z) = z ∏_{j=1}^{n−1} b_{z_j}(z) = det T_A(z) and consider the rational function (1.19). We will show the following key result:

Theorem 1.4 (Magic Formula for MCMV matrices). Fix a finite disjoint union of g + 1 non-degenerate closed circular arcs E ⊂ ∂D, and let \vec z_E be as in (1.16). Then, for any A ∈ A(\vec z_E), membership in T_MCMV(E) is characterized by the Magic Formula (1.20). On the other hand, for fixed \vec z ∈ D^n and A_0 ∈ A_per(\vec z), the spectrum σ(A_0) can be described explicitly, and consequently (1.21) holds.

The Magic Formula reveals further structure of MCMV matrices relating to the discriminant ∆_E. For simplicity, let us assume that the poles of ∆_E are simple (as is typically the case). In that case, since ∆_E is real-valued on ∂D, it can be expressed in the form (1.22). We will abuse the notation for residues and write Res_{z_k} ∆_E for the corresponding coefficients. With (1.22) in mind, to understand the Magic Formula we need to understand each of the operators b_{z_k}(A). In Lemma 4.8 we show that these can be represented by an operator Möbius transform of the same underlying CMV matrix C, but related to a "shifted" diagonal operator. Hence, at the outermost diagonal of ∆_E(A) only one of the summands is non-vanishing. We have illustrated this for the off-diagonal block of c_1 b_{z_1}(A) + \overline{c_1} b_{z_1}(A)^{−1} in the case n = 4 in (1.24), where * and ⋆ indicate nonvanishing entries of b_{z_1}(A) and b_{z_1}(A)^{−1}, respectively. In general, the outermost nonvanishing entry of b_{z_k}(A) is the (2k, 2(g + 1 + k))-entry.
Since all the other summands in (1.22) vanish at this position, the Magic Formula fixes the corresponding value of b_{z_k}(A), i.e.
That this is a consequence of the structure of MCMV matrices will be proved in Theorem 4.9.
The above structure reveals an important property of MCMV matrices compared to their self-adjoint analog, GMP matrices. The relation (1.25) already indicates the importance of the values Res_{z_k^*} ∆_E. In order for b_{z_k}(A) to be well-defined, they should be nonzero. In fact, for a general MCMV matrix A the values |Res_{z_k} ∆_{S^{−2nj}AS^{2nj}}| are necessarily bounded away from zero; we will see this quantified in Lemma 4.3. This should be compared with [37, Theorem 3.3], where such a property was part of the definition of GMP matrices and guaranteed the existence of certain resolvents analogous to b_{z_j}(A). It is natural that we do not need this condition since we are in the setting of unitary operators.
We also point out that the generalized discriminant for MCMV matrices involves the Ahlfors functions associated to two different points; in contrast, the analogous object for GMP matrices involves only the Ahlfors function at infinity. This discrepancy has the consequence that the associated MCMV matrices are even-periodic with half of the gaps closed (cf. Appendix A). While this could be avoided using a different discriminant, doing so would introduce further complications elsewhere. In particular, the benefits of defining the discriminant as we have are 1) we can treat the even-and odd-periodic CMV cases uniformly, and 2) our discriminant is a rational function.

Consequences for Schur and Caratheodory functions
Of course, one cannot discuss CMV matrices without discussing Schur functions. A Schur function is an analytic function f : D → \overline{D} mapping the open unit disk into its closure. We denote by S the class of all Schur functions. Provided f ∈ S is not a finite Blaschke product, the Schur algorithm determines an infinite sequence of parameters {a_k} ∈ D^N, also known as Schur parameters; conversely, any sequence {a_k} ∈ D^N determines a function f ∈ S by an associated continued fraction expansion (see, e.g., [29]). For our purposes, it is more convenient to express the Schur algorithm in terms of equivalences of projective lines, i.e.
where v_1 ∼ v_2 if and only if there exists some nonzero λ ∈ C such that v_1 = λv_2. In correspondence to Schur functions are Caratheodory functions: analytic functions F from D to the right half-plane normalized such that F(0) = 1. A Caratheodory function F can be determined from a function f ∈ S by

F(z) = (1 + zf(z))/(1 − zf(z)),    (1.27)

or equivalently in terms of projective lines. In this latter language it is clear that this process is invertible, so indeed this correspondence is one-to-one. Caratheodory functions have a Herglotz integral representation with respect to a unique probability measure ν_F on ∂D, and are thus in correspondence with probability measures on the unit circle. Consequently, Schur functions can be put into correspondence with half-line CMV matrices, in the sense that for a given half-line CMV matrix C_+ there exists f_+ ∈ S satisfying (1.28), and in fact one can check that this f_+ is the Schur function with parameters {a_k}_{k∈N} agreeing with the coefficients of C_+ = C_+({a_k}). Similarly, whole-line CMV matrices have two associated Schur functions, one corresponding to each half-line. Specifically, if {a_k}_{k∈Z} ∈ D^Z and C = C({a_k}) is the associated CMV matrix, then there are associated functions f_±, where f_+ is the Schur function with parameters {a_k}_{k∈N} and f_− is the Schur function with parameters {−a_{−1}, −a_{−2}, . . .}; cf. [13,25]. Any Schur function f has a natural factorization into a unimodular constant e^{iτ}, a Blaschke product over its zeros {w_k}, and an exponential factor determined by a non-negative measure ν_f on ∂D; here τ ∈ R/2πZ (in fact, τ is the argument of f(0)), and the sequence {w_k} of zeros satisfies the Blaschke condition Σ_k (1 − |w_k|) < ∞. We define the class S_+(E) accordingly. The description of S_+(E) as being equivalent to T_CMV(E) in the finite-gap setting (and for even more general sets) was known already to Peherstorfer and Yuditskii [25]; specifically, they showed that, for f_+ ∈ S_+(E), the corresponding sequence {a_k} ∈ D^Z is almost-periodic.
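Both correspondences can be illustrated concretely. The sketch below (our own illustration, using the standard step f_{k+1}(z) = (f_k(z) − a_k)/(z(1 − \overline{a_k} f_k(z))) with a_k = f_k(0), applied to rational f = p/q via coefficient arrays) recovers the Schur parameters of a simple example and checks the defining properties of the classical transform F(z) = (1 + zf(z))/(1 − zf(z)).

```python
import numpy as np

def pad(p, n):
    return np.concatenate([p, np.zeros(n - len(p), dtype=complex)])

def schur_step(p, q):
    """One step of the Schur algorithm on f = p/q (coefficients ascending):
    a = f(0), f_next(z) = (f(z) - a) / (z (1 - conj(a) f(z)))."""
    n = max(len(p), len(q))
    p, q = pad(p, n), pad(q, n)
    a = p[0] / q[0]
    num = p - a * q               # vanishes at z = 0, so ...
    den = q - np.conj(a) * p
    return a, num[1:], den        # ... dividing by z just drops the constant term

# f(z) = 0.5j * z has Schur parameters (0, 0.5j, 0, 0, ...).
p, q = np.array([0, 0.5j]), np.array([1.0 + 0j])
params = []
for _ in range(4):
    a, p, q = schur_step(p, q)
    params.append(a)
assert np.allclose(params, [0, 0.5j, 0, 0])

# Caratheodory transform F(z) = (1 + z f(z)) / (1 - z f(z)): F(0) = 1, Re F > 0 on D.
f = lambda z: 0.5j * z
F = lambda z: (1 + z * f(z)) / (1 - z * f(z))
assert abs(F(0) - 1) < 1e-14
for z in [0.3, -0.2 + 0.4j, 0.7j]:
    assert F(z).real > 0
```

The eventual vanishing of the parameters here is the degenerate analogue of the eventual periodicity discussed after Corollary 1.6.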
This perspective gives us an alternative way of stating our main results: membership in S_+(E) is equivalent to the existence of an E-dependent Nevanlinna-Pick type interpolation, analogous to (1.26), whose coefficients are periodic up to a rotational phase.

Theorem 1.5. Fix a finite disjoint union of g + 1 non-degenerate closed circular arcs E ⊂ ∂D, and let \vec z_E be as in (1.16). Then f_+ ∈ S_+(E) if and only if it admits such an interpolation for some {a_k} ∈ D^{2(g+1)} and some ϑ ∈ R/2πZ.

As an immediate corollary of Theorem 1.5 (cf. (1.27)), we resolve a conjecture of Simon [30, Conjecture 11.9.6]:

Corollary 1.6. Fix a finite-gap set E ⊂ ∂D. For any C ∈ T_CMV(E), the Caratheodory function F_+ associated to the half-line restriction C_+ is a quadratic irrationality; i.e., there exist polynomials a(z), b(z), and c(z) such that

a(z)F_+(z)² + b(z)F_+(z) + c(z) = 0.

Real numbers which are quadratic irrationalities (with a, b, and c above integers) are precisely those having eventually periodic continued fraction expansions. If one understands the interpolation of Theorem 1.5 as a special continued fraction expansion for the Schur function f_+, Corollary 1.6 should come as no surprise; indeed, our result shows that almost-periodicity of the Schur parameters associated to absolutely continuous finite-gap CMV matrices is actually a consequence of an underlying periodicity which the Schur algorithm was too naïve to see.

Methods and structure of the paper
The relationship of CMV matrices to orthogonal polynomials was discovered by Cantero, Moral, and Velázquez in 2003 [7]. Soon thereafter, the relationship of operator Möbius transforms of CMV matrices to the study of orthogonal rational functions on the unit circle was studied in work of Velázquez [35]. We recall these relationships in Section 2 to motivate the following construction, as well as to prove a coefficient stripping formula for Caratheodory functions associated to bases of orthogonal rational functions.
Our approach to MCMV in the context of reflectionless operators is based on the functional model for the same, developed initially for Jacobi matrices by Sodin and Yuditskii [33] and later adapted for Schur functions and CMV matrices by Peherstorfer and Yuditskii [25]. Using the ideas developed by Eichinger and Yuditskii for GMP matrices (the Jacobi analog of MCMV matrices, cf. [9,37]) and comparing this construction to that of Velázquez proves one direction of the equivalences in Theorems 1.2, 1.4, and 1.5. We review the functional model for CMV matrices and reveal the corresponding MCMV structure in Section 3.
Having motivated our class of MCMV matrices and shown that finite-gap CMV matrices correspond to periodic MCMV matrices, we perform a direct spectral analysis for periodic MCMV matrices after reviewing the corresponding classical analysis of CMV matrices (cf., e.g., [29,30,31]) in Section 4. We also analyze the structure of a general MCMV matrix in Section 4.3.
Finally, we use the tools developed in Sections 2 through 4 to completely resolve the proofs of Theorems 1.2, 1.4, and 1.5 in Section 5.

Orthogonal rational functions
The aim of this section is to establish a coefficient stripping formula for Caratheodory functions associated to bases of orthogonal rational functions (ORF). Our main result, Theorem 2.2, is an analog of the Stieltjes expansion for m-functions of Jacobi matrices [31, Theorem 3.2.4] and of Peherstorfer's formula for orthogonal polynomials on the unit circle (OPUC) [29,Theorem 3.4.2]. It will play an important role in Section 4 where we seek to solve the direct spectral problem for periodic MCMV matrices.

The Szegő recursion
Given a nontrivial (i.e., of infinite support) probability measure ν on ∂D, one obtains the monic orthogonal polynomials Φ_k := Φ_k(z, ν) by orthogonalizing the family {1, z, z², . . .} in L²(dν). The Φ_k's are known to satisfy a recurrence relation of the form

Φ_{k+1}(z) = zΦ_k(z) − \overline{a_k}Φ_k^*(z)    (2.1)

for some sequence {a_k}_{k=0}^∞ of numbers in D. Though it may look strange, we purposely write −a_k in (2.1) so that the a_k's coincide with the Schur parameters (introduced in Section 1.2). Following [29], we shall also refer to the a_k's as Verblunsky coefficients and recall there is a one-to-one correspondence between such sequences (in D^N) and nontrivial probability measures on ∂D. Here Φ_k^* is the reversed polynomial, that is, Φ_k^*(z) = z^k \overline{Φ_k(z^*)}. While the notation * is convenient, it is ambiguous: it depends on the class L_k := span{1, z, . . . , z^k} and has a different meaning for L_k and L_j when k ≠ j. Note that the operation ϕ ↦ z^k \overline{ϕ(z^*)} acts as an involution on the subspace L_k. We shall also use this abuse of notation for bases of orthogonal rational functions (where naturally z^k is substituted by the Blaschke product corresponding to the poles of the first k basis elements). Applying * for the class L_{k+1} to (2.1) yields the so-called Szegő recursion

Φ_{k+1}^*(z) = Φ_k^*(z) − a_k zΦ_k(z).    (2.2)
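Since the displayed recursion above had to be reconstructed, the following sketch fixes one standard convention, Φ_{k+1}(z) = zΦ_k(z) − \overline{a_k}Φ_k^*(z), and checks its basic features numerically: the free case a_k ≡ 0 produces the monomials z^k, a polynomial and its reverse have equal modulus on ∂D, and the starred recursion is consistent with the unstarred one.

```python
import numpy as np

def star(phi):
    """Reversed polynomial Phi^*(z) = z^deg * conj(Phi(1/conj(z))):
    conjugate and reverse the (ascending) coefficient array."""
    return np.conj(phi)[::-1]

def szego_step(phi, a):
    """One monic Szego step in the convention
    Phi_{k+1}(z) = z Phi_k(z) - conj(a_k) Phi_k^*(z)."""
    return np.concatenate([[0], phi]) - np.conj(a) * np.concatenate([star(phi), [0]])

# Free case a_k = 0: the recursion gives the monomials z^k.
phi = np.array([1.0 + 0j])
for _ in range(3):
    phi = szego_step(phi, 0)
assert np.allclose(phi, [0, 0, 0, 1])

# One nontrivial step, a_0 = 0.3: Phi_1(z) = z - 0.3, and on |z| = 1
# the polynomial and its reverse have equal modulus.
phi1 = szego_step(np.array([1.0 + 0j]), 0.3)
assert np.allclose(phi1, [-0.3, 1])
z = np.exp(1j * np.linspace(0, 2 * np.pi, 7))
assert np.allclose(np.abs(np.polyval(phi1[::-1], z)),
                   np.abs(np.polyval(star(phi1)[::-1], z)))

# Consistency of the starred recursion Phi_{k+1}^* = Phi_k^* - a_k z Phi_k:
assert np.allclose(star(phi1),
                   np.concatenate([star(np.array([1.0 + 0j])), [0]])
                   - 0.3 * np.concatenate([[0], [1.0 + 0j]]))
```

Other sign conventions appear in the literature; only the internal consistency of the pair (2.1)-(2.2) is being tested here.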
For the orthonormal polynomials ϕ_k := Φ_k/‖Φ_k‖, this recursion takes a matrix form with U(·) as in (1.17).
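The matrix U(·) of (1.17) is not reproduced in our copy; as an illustration of the j-properties discussed in the introduction, the sketch below checks that the standard normalized Szegő transfer matrix (which we assume plays the role of U) is j-unitary on ∂D and j-contractive on D.

```python
import numpy as np

J = np.diag([1.0, -1.0])

def transfer(z, a):
    """Normalized Szego transfer matrix (standard convention, assumed here
    to play the role of U in (1.17)): (1/rho) [[z, -conj(a)], [-a z, 1]]."""
    rho = np.sqrt(1 - abs(a) ** 2)
    return np.array([[z, -np.conj(a)], [-a * z, 1]]) / rho

a = 0.4 + 0.2j

# j-unitary on the circle: U* J U = J.
U = transfer(np.exp(0.9j), a)
assert np.allclose(U.conj().T @ J @ U, J)

# j-contractive in the disk: J - U* J U is positive semidefinite.
U = transfer(0.35 - 0.2j, a)
gap = J - U.conj().T @ J @ U
assert np.all(np.linalg.eigvalsh(gap) >= -1e-12)
```

A direct computation shows U(z)* J U(z) = diag(|z|², −1), which makes both properties transparent.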

Coefficient stripping for orthogonal rational functions
Let z_0 = 0 and fix a sequence of points {z_k}_{k=1}^∞ ⊂ D satisfying the divergence condition (2.4). Note that (2.4) is trivially satisfied if sup_k |z_k| < 1 (which will always be the case in our setting). Recall that by {B_k} we denote the family of finite Blaschke products built from the factors b_{z_j}. Given a nontrivial probability measure ν on ∂D, let {ϕ_k}_{k=0}^∞ be the corresponding sequence of orthonormal rational functions obtained by orthogonalizing the family {B_k}_{k=0}^∞ in L²(dν). With L_k := span{ϕ_j : 0 ≤ j ≤ k}, the associated *-operator is now defined by ϕ^*(z) = B_k(z)\overline{ϕ(z^*)} for ϕ ∈ L_k. Choosing the right unimodular constants in the normalization (in particular, ϕ_0 ≡ 1), the ϕ_k's satisfy the recurrence relation (2.5); see [5, Theorem 4.1.3] and [35]. The rational functions ψ_k of the second kind are defined analogously and satisfy almost the same recurrence relation as the ϕ_k, namely (2.6).
Note that the coefficients a_k in (2.5)-(2.6) belong to D and are given explicitly in terms of ν; cf. [5, Theorem 4.1.2]. Conversely, starting from arbitrary coefficients {a_k}_{k=0}^∞ ∈ D^N one can generate a sequence of rational functions by (2.5) and show that they are orthogonal with respect to some probability measure ν on ∂D. This is the content of the following known results: the associated Caratheodory functions F_{ν_k} admit an explicit form, and ν_k converges weakly to some probability measure ν which, in turn, is the unique measure of orthogonality for {ϕ_k}. In particular, F_{ν_k} → F_ν uniformly on compact subsets of D.
Note that if z_k ≡ 0, then (2.5) reduces to the standard Szegő recursion (2.2). Just as for orthogonal polynomials on the unit circle, there is a one-to-one correspondence between coefficient sequences {a_k} ∈ D^N and nontrivial probability measures on ∂D. In fact, the theory for ORF generalizes that for OPUC. We mention in passing that the assumption (2.4) ensures that the measure of orthogonality ν is unique.
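A concrete instance of an ORF basis is the classical Malmquist-Takenaka system, which orthonormalizes the Blaschke-product family when ν is normalized Lebesgue measure on ∂D. The sketch below (a known special case, included here only as an illustration of the ORF framework) verifies orthonormality numerically via the circle mean.

```python
import numpy as np

def blaschke(w, z):
    return (z - w) / (1 - np.conj(w) * z)

def mt_basis(points, z):
    """Malmquist-Takenaka functions: phi_k = sqrt(1 - |w_k|^2)/(1 - conj(w_k) z)
    times the product of earlier Blaschke factors; orthonormal in H^2(dtheta/2pi)."""
    fns = []
    prod = np.ones_like(z)
    for w in points:
        fns.append(np.sqrt(1 - abs(w) ** 2) / (1 - np.conj(w) * z) * prod)
        prod = prod * blaschke(w, z)
    return fns

N = 1024
z = np.exp(2j * np.pi * np.arange(N) / N)        # quadrature nodes on the circle
fns = mt_basis([0, 0.5, -0.3 + 0.4j], z)

# Gram matrix with respect to dtheta/2pi, computed by the (exponentially
# accurate) trapezoidal rule; it should be the identity.
gram = np.array([[np.mean(f * np.conj(g)) for g in fns] for f in fns])
assert np.allclose(gram, np.eye(3), atol=1e-10)
```

Taking the first point at 0 makes the first basis function identically 1, matching the normalization ϕ_0 ≡ 1 above.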
We are now ready to derive the promised coefficient stripping formula. Let z_0 = 0, z_1, . . . , z_{p−1} be a finite number of points in D and consider the sequence {z_k}_{k=0}^∞ obtained by periodic extension of the initial p values (i.e., z_{k+p} = z_k for all k). Note that in this situation the Blaschke condition is trivially violated. Our result then reads as follows: the Caratheodory function F_ν satisfies the coefficient stripping relation (2.11), in which the coefficient matrix M can be expressed in terms of the ORF and the rational functions of the second kind by (2.12).
Proof. Due to (2.5) and (2.6), we can write the recursion for the pair (ϕ_k, ψ_k) in matrix form. Iterating p times, starting from k = p − 1, the factors in front of the U's cancel (due to telescoping), and it follows that (2.13) holds, where W is the transfer matrix over one period. Using a superscript (j) for the objects related to the shifted sequence {a_{k+jp}}_{k=0}^∞, we obtain a similar identity for each j. Considering the second row of this identity (or rather its transpose) yields (2.15). Due to (2.8) and (2.9), we now obtain (2.11) by passing to the limit as n → ∞ in (2.15). Finally, the identity (2.12) follows directly from (2.13) and (2.16).
If the sequence {a_k} is periodic (or periodic up to a rotational phase) and its period matches the period of the sequence {z_k}, then our result simplifies and (2.11) turns into a quadratic equation for F_ν. The result becomes particularly simple if we pass from the relation for Caratheodory functions to the one for Schur functions: suppose that {a_k}_{k=0}^∞ is periodic up to a rotational phase with period p and phase −2ϑ, and let f_ν denote the associated Schur function; see (1.27). Then f_ν likewise satisfies a quadratic equation.

The functional model
In the last two decades a significant amount of progress has been made in understanding reflectionless one-dimensional operators as multiplication operators on certain subspaces of Hardy spaces associated to multiply-connected Riemann surfaces. We broadly refer to this construction as a "functional model" for the associated operators. In this section, we first recall the requisite definitions needed to develop such models, then present the specific model of Peherstorfer-Yuditskii for almost-periodic CMV matrices, and finally associate to this our model for MCMV matrices.

Hardy spaces of character automorphic functions
Fix a finite-gap set E ⊂ ∂D. By means of the Koebe-Poincaré uniformization theorem, the spectral complement in the Riemann sphere C \ E is uniformized by the disk D; that is, there exists a Fuchsian group Γ and a meromorphic function z : D → C \ E with the usual covering properties. We fix this map by requiring that z map the interval (−1, 1) onto a fixed connected component of ∂D \ E. Due to this choice, there exists a fundamental domain F for the action of Γ which is symmetric with respect to complex conjugation, i.e., \overline{ζ} ∈ F whenever ζ ∈ F.
We denote by Γ^* the group of unitary characters of Γ; that is, group homomorphisms from Γ into T := R/2πZ. By the covering space formalism, Γ is isomorphic (as a group) to the fundamental group π_1(C \ E), and so Γ^* ≅ T^g (where g + 1 is the number of gaps of E).
Let H² = H²(D) denote the usual Hardy space of the unit disk. For α ∈ Γ^*, we consider the Hardy space H²(α) of character automorphic functions. More generally, we define the larger space L²(α) as the space of those functions f : ∂D → C which are square integrable and α-automorphic. Naturally, H²(α) ⊂ L²(α) via the identification of a function f ∈ H² with its radial limit function on the boundary. For finite-gap sets E, a fundamental result of Widom [36] implies that H²(α) is nontrivial for all α ∈ Γ^*. This in fact applies to all subsets E ⊂ C of so-called Parreau-Widom type (see, e.g., [14] for details). By continuity of the point evaluation functional, H²(α) admits a family of reproducing kernels {k^α(ζ, ζ_1)}_{ζ_1∈D}. By the reproducing property and nontriviality of H²(α), it follows that k^α(ζ_1, ζ_1) > 0. We may thus define the corresponding normalized vectors by rescaling to unit norm. We will sometimes abbreviate k^α_{ζ_1}(ζ) := k^α(ζ, ζ_1) and K^α_{ζ_1}(ζ) := k^α_{ζ_1}(ζ)/√(k^α(ζ_1, ζ_1)). We shall make frequent use of (3.5) (as well as (3.7) below) in our computations.
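In the trivial case Γ = {id} (so that H²(α) = H²), the reproducing kernel is the classical Szegő kernel k(ζ, ζ_1) = 1/(1 − \overline{ζ_1}ζ). The sketch below (our own illustration of this special case) checks the reproducing property and the unit norm of the normalized kernel numerically via the circle mean.

```python
import numpy as np

N = 2048
z = np.exp(2j * np.pi * np.arange(N) / N)

def inner(f_vals, g_vals):
    """H^2 inner product <f, g> = (1/2pi) int f conj(g) dtheta, via the circle mean."""
    return np.mean(f_vals * np.conj(g_vals))

w = 0.3 + 0.4j
k_w = 1 / (1 - np.conj(w) * z)      # Szego kernel of H^2(D), the Gamma = {id} case

# Reproducing property: <f, k_w> = f(w) for analytic f.
f = lambda t: 2 * t ** 3 - 1j * t + 0.5
assert np.isclose(inner(f(z), k_w), f(w))

# k(w, w) = 1/(1 - |w|^2) > 0, and the normalized kernel K_w has unit norm.
k_ww = 1 / (1 - abs(w) ** 2)
K_w = k_w / np.sqrt(k_ww)
assert np.isclose(inner(K_w, K_w), 1.0)
```

The positivity k(w, w) > 0 used above is the same fact that makes the normalization K^α_{ζ_1} possible in the character automorphic setting.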
For finite-gap sets E, the map α → K α (ζ 1 , ζ 1 ) is continuous for every ζ 1 ∈ D. This property is in fact responsible for the almost-periodic structure of the CMV matrices in T CMV (E), compare with Theorem 3.4 below. We mention in passing that this type of continuity in the character is known to hold for all Parreau-Widom sets E ⊂ C satisfying the so-called Direct Cauchy Theorem (see, e.g., [14]).
For a fixed ζ_1 ∈ D we denote by b(ζ, ζ_1) the Blaschke product with zeros at the orbit of ζ_1 under Γ, with the phase φ = φ(ζ_1) normalized such that b(ζ_1, ζ_1) > 0. As follows directly from (3.6) and (3.1), we have (3.7). If we want to suppress the dependence on ζ, we also write b_{ζ_1} := b(·, ζ_1). Interestingly, since E ⊂ ∂D, the uniformization enjoys a reflection symmetry, and we may represent z as a ratio of distinguished Blaschke products: z = φ_0 b_{ζ_0}/b_{\overline{ζ_0}} (3.10), where z(ζ_0) = 0 as before and φ_0 ∈ T is some phase. Since z is automorphic, it follows that µ_{ζ_0} = µ_{\overline{ζ_0}}; we will abbreviate this common character by µ_0.
In the coming subsections, we will study multiplication by this uniformization map z as a linear operator on L²(α) with respect to different bases. To this end, we require a technical lemma on reproducing kernels which allows us to effectively compute residues. We first recall the following orthogonal decomposition of H²(α):

Lemma 3.1. For ζ_1 ∈ D and α ∈ Γ^*, one has H²(α) = span{k^α_{ζ_1}} ⊕ b_{ζ_1}H²(α − µ_{ζ_1}).

Proof. Using the reproducing kernel property, it is clear that k^α_{ζ_1} ∈ K_{b_{ζ_1}}(α) := H²(α) ⊖ b_{ζ_1}H²(α − µ_{ζ_1}). Conversely, let f ∈ H²(α) and suppose f ⊥ k^α_{ζ_1}. Then 0 = ⟨f, k^α_{ζ_1}⟩ = f(ζ_1), and since f is character automorphic, f(γ(ζ_1)) = 0 for all γ ∈ Γ. The standard factorization theorem for H² functions and a comparison of the characters now imply that f ∈ b_{ζ_1}H²(α − µ_{ζ_1}).
Proof. By our assumptions and Lemma 3.1, we obtain the first identity. On the other hand, as g/b_{ζ_1} ∈ H²(α), we also obtain the second. This completes the proof.

The Peherstorfer-Yuditskii model for CMV matrices
To motivate the MCMV functional model, we first recall the functional model for the usual CMV matrices. Everything that follows in this section is in some way already presented in the literature. We nevertheless try to be quite precise, because we feel that the meaning of the additional parameter τ ∈ R/2πZ has not really been discussed yet in terms of the functional model. Moreover, it will give us an understanding of the notion of periodicity up to a phase in CMV matrices, which will be important in the later part of our paper. Let (α, τ) ∈ Γ^* × T, and for φ_0 given by (3.10) define, for every l ∈ Z, the functions x^{α,τ}_l and y^{α,τ}_l as in (3.14). It is straightforward to see that for any τ ∈ T, {x^{α,τ}_0, x^{α,τ}_1} and {y^{α,τ}_0, y^{α,τ}_1} form two distinct orthonormal bases of the same two-dimensional subspace. Iterating this decomposition exhausts H²(α) (and in fact, the larger space L²(α)). Almost-periodic absolutely continuous whole-line CMV matrices with spectrum E arise exactly as multiplication by z in the basis {y^{α,τ}_k}_{k∈Z}:

Theorem 3.4 (Peherstorfer-Yuditskii [25]). Multiplication by z in the basis {y^{α,τ}_k}_{k∈Z} is a CMV matrix C(α, τ) with almost-periodic Verblunsky coefficients given by (3.16).
Remark. Peherstorfer and Yuditskii actually studied the family of Schur functions f^{α,τ} given by (3.18), but this is equivalent by the equality of Schur parameters and Verblunsky coefficients. This perspective explains the necessity of including the parameter τ; we wish to completely classify such Schur functions, not merely classify them up to a rotation.
We can see this theorem via the LM structure by alternating between the basis {y^{α,τ}_k} and the dual basis {x^{α,τ}_k}. Namely, we have the following:

Lemma 3.5. With notation as above, the two bases are related by explicit two-term relations.

Proof. It follows from the reproducing kernel property that the claimed relations hold; using (3.10), the lemma then follows by algebraic manipulations.
Proof of Theorem 3.4. We can shift the relations in the previous lemma to see that, with appropriate operators L and M, M sends the basis {y^{α,τ}_k}_{k∈Z} to {z(x^{α,τ}_k)}_{k∈Z} and L sends {x^{α,τ}_k}_{k∈Z} to {y^{α,τ}_k}_{k∈Z}. Thus multiplication by z in the basis {y^{α,τ}_k} is given by C = LM, which is a CMV matrix with precisely the Verblunsky coefficients a_k(α, τ) as above.
We conclude by pointing out that the CMV matrix C(α, τ) is periodic if and only if φ_0 ∈ 2πQ and there exists N ≥ 1 such that Nµ_0 = 0_{Γ^*}. If only the latter holds (i.e., φ_0 ∉ 2πQ), then C(α, τ) is periodic up to a phase, with phase e^{−iNφ_0}.

A modified basis suited for periodicity
We have seen in the previous subsection that whether the isospectral torus of CMV matrices consists of periodic or almost-periodic operators is related to whether there exists N ≥ 1 such that (b_{ζ_0}b_{\overline{ζ_0}})^N can be lifted to a single-valued function on C \ E. In this section we will study a basis associated to Blaschke products which have this property, and by definition the corresponding multiplication operator in this basis will be periodic. To fix the notation, let B be the product of Blaschke products as in (3.20), and let β := β_{\vec z} denote its character. Our condition on the vector \vec z is that β is a half-period (i.e., 2β = 0_{Γ^*}).
Remark. The Ahlfors function shows by example that this condition can always be met, and a function as in (3.20) indeed exists. Recall that w_∞ denotes the Ahlfors function of C \ E and the point ∞. If E has g + 1 gaps, w_∞ has exactly g zeros in D, say z_1, . . . , z_g, and one zero at ∞. Moreover, |w_∞| = 1 on E and |w_∞| < 1 in C \ E. From this it follows that the pullback of zw_∞ under the uniformization, that is, \widehat{w}_∞ := (zw_∞) ∘ z, is a function with the properties mentioned above, with n = g + 1; see Appendix A for a more detailed discussion.
Denoting by z l the pullback of b z l to the uniformization, i.e.
we see that there exists a certain phase φ l such that (3.22) holds. This allows us to decompose H 2 (α) by iterations of the finite-dimensional subspace (3.23)
without shifting the character. This lack of shift is ultimately what will lead to periodicity up to a phase. Our strategy is as follows: suppose we have a vector z ∈ D n with associated Blaschke product B as above whose character β is a half-period. Similar to CMV matrices, we will have one step comparing symmetric pairs ζ l , ζ l , corresponding to shifting from a pole z l inside the disk to its symmetric point z * l outside the disk; this corresponds to a representation which we can iterate to exhaust K BB * (α). As in the CMV case, we will be able to act on even steps by a 2 × 2 block-diagonal operator M to alternate between dual bases respecting the symmetric poles on each two-dimensional subspace H 2 (α) ⊖ b ζ l b ζ l H 2 (α − 2µ l ). However (and this is the difference relative to CMV matrices) in the odd steps we wish to pass from the pole ζ l to the new pole ζ l+1 . Of course, since K α ζ k ∉ H 2 (α) ⊖ b ζ l b ζ l H 2 (α − 2µ l ) when ζ k ≠ ζ l , something new is required to perform this shift. In this sense, the fundamental lemma allowing for our analysis is the following simple realization: Lemma 3.6. For any α ∈ Γ * and with z l , z k ∈ D and ζ l , ζ k as above, the corresponding identity holds. Since b ζ l is unimodular on the boundary, the adjoint in H 2 of multiplication by b ζ l (and consequently z) is multiplication by b −1 ζ l (respectively z −1 ); thus, by computing adjoints and applying the reproducing property, one obtains the lemma. Now the way ahead is clear: we apply Lemma 3.6 to expand the shifted reproducing kernel in terms of the reproducing kernels for the previous pole. Let α ∈ Γ * and ζ l , ζ k ∈ D be as above, and define c α 1 (ζ k , ζ l ) and c α 2 (ζ k , ζ l ) by (3.26)-(3.27), where at the removable singularity ζ k = ζ l we take the limiting value. We then obtain the decomposition (3.29)-(3.30). Proof. That such a decomposition exists is precisely the content of Lemma 3.6 and (3.24).
Using Lemma 3.2 and (3.22), the numerator can be simplified, and we thus arrive at the expression for c α 1 (ζ k , ζ l ) in (3.26). Plugging ζ k into (3.29) makes the left-hand side vanish, and we deduce that the coefficient in front of K α ζ l is given by c α 2 (ζ k , ζ l ) as in (3.27). Equation (3.30) follows by applying the operation f (ζ) → f (ζ) to (3.29).
Since the decomposition in the previous lemma is not orthogonal, we do not immediately get a nice Pythagorean identity; however, with the normalization defined below, we have the following Lemma 3.8.
In particular, for c α 1 , c α 2 defined in (3.26)-(3.27), (3.32) follows immediately from the Pythagorean identity after expanding k α ζ k in the two bases. It remains to show (3.33). With (3.22) in mind, we see that (3.33) is equivalent to the displayed identity; a simple calculation, together with (3.22), then gives the claim.
Combining all of the above results, we arrive at Proposition 3.9.
When ζ l = ζ k , this simplifies further. Proof. Multiplying the identity (3.35) through by (1 − z l z) −1 , the first line of the identity is simply (3.30). The second line follows from (3.29), the first line, and an application of (3.33).
Define also ω α k,l := arg c α 1 (ζ k , ζ l ) (3.40) and note that ω α l,l = 0 due to our normalization b ζ l (ζ l ) > 0. Then the content of the previous proposition is the pair of identities obtained by considering the cases ζ k = ζ l and ζ k = ζ l+1 , respectively. We are finally ready to establish our basis. With the conventions on ϑ α l indicated above and ζ n = ζ 0 , we define for 0 ≤ l ≤ n − 1 the functions x α,τ l and y α,τ l . In analogy to the CMV case (3.15), they form two different bases of the 2n-dimensional subspace K BB * ; cf. (3.23). Letting p = 2n, we extend this family of functions (for j ∈ Z) by (3.46). By iterating the exhaustion (3.23), it is not difficult to see that the systems of functions {x α,τ k } k∈I and {y α,τ k } k∈I each form an orthonormal basis of H 2 (α) for I = N. By [9, Lemma 3.5], it follows that they also form a basis of L 2 (α) when I = Z.
Let us comment on the meaning of the unimodular constants appearing in the definitions above. First of all, we can choose the unimodular constant freely in the normalization of the basis functions {x 0 , x 1 }; this explains the meaning of the additional parameter τ . Once this normalization is fixed, the normalization of the subsequent basis functions is already determined: comparing (3.35) and (3.42), we see that, apart from the additional parameter τ , the main difference between the constant matrix on the right-hand side of (3.35) and the matrix Θ(α, τ ; ζ l , ζ l+1 ) in (3.42) is that the latter has positive off-diagonal entries. This has been achieved by adding the phase e iω α l+1,l to the reproducing kernels. These phases accumulate with each step as the phases e iϑ α l . Define now the periodic-up-to-a-phase Verblunsky coefficients {a k (α, τ ; z)}. Then multiplication by z in our modified basis is represented by an MCMV matrix with the above parameters: Theorem 3.10. Let C = C(α, τ ; z) be the periodic-up-to-a-phase CMV matrix with Verblunsky coefficients a k (α, τ ; z) and let D 0 be the 2n-periodic diagonal matrix given by (1.4). Then with respect to the basis {y α,τ k } k∈Z of L 2 (α), multiplication by z is represented by the MCMV matrix b −D0 (C). Proof. We use liberally the following two simple observations: diagonal matrices commute, and, for a ∈ D, ρ = (1 − |a| 2 ) 1/2 , and θ 0 , θ 1 ∈ R/2πZ, the corresponding factorization identity holds. Let 0 ≤ l ≤ n − 1. With the above facts in hand, it is then clear that (3.41) is equivalent to

Multiplying both sides by a suitable diagonal factor yields (3.51). Since in the context of (3.42) we have e iφ l = e iφ l I (where I is the 2 × 2 identity matrix), that equation can also be written in block form; multiplying both sides by a suitable factor yields (3.52).
Extending to all l follows similarly from the definitions. Denote now by D 0 := D 0 ( z) the 2n-periodic diagonal matrix in (1.4), let η D0 = 1 − D 0 D * 0 , and fix the factorization below, where Θ k acts on the two-dimensional subspace spanned by {δ k , δ k+1 }. Combining the statements (3.51) and (3.52) above, we have shown the displayed relation, where y α,τ is shorthand notation for the vector (y α,τ k ) k∈Z . Since the operators η −1 D0 , z − D 0 , and 1 − zD 0 * commute with M (for they are orthogonal sums of multiples of I along the odd terms), taking C = LM we obtain an identity which can be rearranged to show that, in the basis {y α,τ k }, multiplication by z is given by b −D0 (C). Of course, (3.35) and (3.36) in combination with the exhaustion (3.23) without shifted character imply a transfer matrix relation in terms of the reproducing kernels. To see this relation explicitly, first note that, denoting as shorthand a k = a k (α, τ ; z), ρ k = ρ k (α; z), and U (a k ) as in (1.17), we can rewrite (3.51)-(3.52) in the form (3.54)-(3.55). Again using the notation that z n = z 0 = 0, η n = η 0 , etc., and with the notation introduced above, we arrive at the following monodromy relation, Theorem 3.11. Proof. This follows from iterating (3.54)-(3.55) over a full period of size p = 2n, since the multiplier terms telescope and x α,τ 2n = BB * e −iϑ α n−1 x α,τ 0 , y α,τ 2n = BB * e iϑ α n−1 y α,τ 0 .
Remark. In terms of projective lines, (3.58) is, up to a phase, precisely the relation (2.18) for the Schur functions (3.18).
We are now ready to give a detailed explanation for introducing the matrix Λ(ϑ) in Definition 1.1. In our extension (3.46) of the vectors {y α,τ l } 2n−1 l=0 to a basis of L 2 (α), we added a phase e iϑ α n−1 in order to represent the multiplication operator by z as an operator Möbius transform of a CMV matrix C(α, τ ; z); the phase was needed to make the off-diagonal entries of Θ l (α, τ ; z) in its LM factorization positive. The price we paid is that the corresponding matrix is merely periodic up to a phase. If we had instead extended by multiplying by (BB * ) j without the phase, then the corresponding operator would have been periodic. This alternative basis, say {y α,τ per,l }, is related to {y α,τ l } by an explicit phase factor. To sum up, we have obtained a map (3.59) from Γ * × T to T MCMV (E):

Corollary 3.12. Let C, D 0 be as in Theorem 3.10 and set A = b −D0 (C).
Then A ∈ A per ( z) and σ(A) = E. Moreover, with ∆ A as in (1.19), the identities (3.60) and (3.61) hold. In particular, in the special case z = z E we have A ∈ T MCMV (E). Proof. The first statement follows from the discussion above. The fact that σ(A) = E is clear since A is the matrix of multiplication by z. If we set T̃ = B −1 T , then Theorem 3.11 states that (BB * ) −1 is an eigenvalue of T̃ . As det T̃ = 1, we obtain (3.60). Finally, (3.61) is a direct consequence of the fact that multiplication by BB * corresponds to the action of S 2n in the basis {y α,τ per,l } l∈Z .

Direct spectral theory
In the previous section, we saw that the functional model developed by Peherstorfer and Yuditskii to represent finite-gap almost-periodic CMV matrices has corresponding representations as periodic MCMV matrices. In this section, we develop the necessary tools to address the converse: that any periodic MCMV matrix arises from such a functional model. For periodic CMV matrices C ∈ T CMV (E), the bijective nature of this correspondence is by now classical (cf. (4.4) below); we recall the elements of its construction in Section 4.1. A key component of this correspondence is a set of spectral data associated to the one-sided restriction C + , called the divisor or Dirichlet data, which, together with the discriminant, allows one to uniquely recover the spectral measure and hence the operator C + . We will adapt this construction to periodic MCMV matrices in Section 4.2, culminating in the uniqueness statement Proposition 4.6. Finally, Section 4.3 explores the block structure (1.7) of general MCMV matrices and its invariance under certain Möbius transformations.

The isospectral torus of periodic CMV matrices
Let {a k } k∈Z be a periodic sequence with even period p = 2n and let C be the corresponding whole-line CMV matrix. If we define the discriminant ∆ C in the standard way, then the spectrum of C is given by σ(C) = ∆ C −1 ([−2, 2]). This spectrum is purely absolutely continuous and of multiplicity two. Moreover, there are p critical points. As we have seen, the isospectral torus T CMV (E) is a (g + 1)-dimensional torus. In particular, the spectrum does not uniquely determine the operator C. In order to get the full spectral data to solve the inverse problem, we consider the half-line operator C + with spectral measure ν. One can show there are explicit rational functions u, v such that, for a suitable branch of the square root, the associated Caratheodory function is given by (4.1).
It is known that u has precisely one zero in each gap of E, and if u(z) = 0, then (∆ C (z) 2 − 4) 1/2 is either −v(z) or v(z). A zero of u for which the numerator in (4.1) does not vanish corresponds to an eigenvalue of C + . Note that for closed gaps, the numerator always vanishes. Let {x j } g j=0 be the set of all zeros of u which lie in open gaps, and let us write (x j , 1) if x j is an eigenvalue of C + and (x j , −1) otherwise. Then the spectrum together with the divisor D = {(x j , ε j )} g j=0 form the full spectral data and determine C completely. In fact, if we define D(E) as in (4.2), with the identifications (λ ± j , −1) ∼ (λ ± j , 1), then D(E) equipped with the product topology of circles is homeomorphic to Γ * × T and hence also to T CMV (E). Inspired by results in the framework of Jacobi matrices [33], this has been generalized in [25] to the much more general class of Parreau-Widom sets E ⊂ ∂D satisfying the Direct Cauchy Theorem. The homeomorphism is called the generalized Abel map; it is the map which completes the diagram (4.4). In the finitely connected setting, the Abel map is well understood (see, e.g., [19,28]). The connection to spectral theory of Jacobi matrices goes back to Akhiezer [2]; see also [3,18,20].
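As an aside, the discriminant of a periodic Verblunsky sequence can be computed numerically from the Szegő transfer matrices. The sketch below is our own illustration and follows the convention of Simon's OPUC monograph, A(a)(z) = ρ^{-1} [[z, −ā], [−az, 1]] with ρ = (1 − |a|²)^{1/2} and ∆(z) = z^{−p/2} tr(A(a_{p−1}) · · · A(a_0)), which may differ from the normalization used here. It checks that ∆ is real on the unit circle, consistent with σ(C) = ∆^{−1}([−2, 2]) ⊂ ∂D.

```python
import numpy as np

def transfer(a, z):
    """One-step Szego transfer matrix for Verblunsky coefficient a."""
    rho = np.sqrt(1 - abs(a) ** 2)
    return np.array([[z, -np.conj(a)], [-a * z, 1]]) / rho

def discriminant(alphas, z):
    """Delta(z) = z^(-p/2) * tr(T_p(z)) for a period-p sequence (p even)."""
    T = np.eye(2, dtype=complex)
    for a in alphas:
        T = transfer(a, z) @ T
    return np.trace(T) * z ** (-len(alphas) / 2)

alphas = [0.3 + 0.2j, -0.1j, 0.4, 0.2 - 0.3j]          # period p = 4
for t in np.linspace(0, 2 * np.pi, 50, endpoint=False):
    d = discriminant(alphas, np.exp(1j * t))
    assert abs(d.imag) < 1e-10                         # Delta real on |z| = 1

# Period p = 2 closed form: Delta = (z + 1/z + 2 Re(a1 conj(a0))) / (rho0 rho1)
a0, a1 = 0.5, 0.25j
z = np.exp(0.7j)
rho01 = np.sqrt((1 - abs(a0) ** 2) * (1 - abs(a1) ** 2))
assert abs(discriminant([a0, a1], z)
           - (z + 1 / z + 2 * np.real(a1 * np.conj(a0))) / rho01) < 1e-12
```

The realness on ∂D follows from the symmetry z · conj(A(a)(z)) = σ A(a)(z) σ (σ the flip matrix) valid for |z| = 1, which forces conj(∆) = ∆ there.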

Spectral theory for periodic MCMV matrices
In this section we will perform a spectral analysis for MCMV matrices that are periodic up to a phase. The spectral data will be given by the discriminant and zeros of a certain function which is explicitly defined in terms of the orthogonal rational functions. For CMV matrices it is easy to see that the leading coefficient of the discriminant is positive, and the discriminant is always of maximal degree. For MCMV matrices, however, the situation is more involved; we shall clarify the degree issue in Lemma 4.3.
First we have to clarify what we mean by a half-line MCMV matrix. Recall the definition of A per ( z) for a given vector z = {z 0 , . . . , z n−1 }; cf. Definition 1.1. Given A ∈ A per ( z), we will study the half-line MCMV matrix (4.5) associated to the sequence of Verblunsky coefficients {a k } ∞ k=0 and z. Specifically, in (4.5), C + = C + ({a k } ∞ k=0 ) is the half-line CMV matrix with Verblunsky coefficients {a k } ∞ k=0 and D + denotes the diagonal operator D + = diag{z 0 , z 1 , z 1 , · · · , z n−1 , z 0 |z 0 , · · · }. Since A ∈ A per and D + is diagonal, it follows that C + is periodic up to a phase with phase e −2iϑ . Due to [35, Theorem 5.4], the measure ν of orthogonality for the family of orthonormal rational functions related to the poles {z 0 , z 1 , z 1 , . . . , z n−1 , z 0 |z 0 , . . . } and Verblunsky coefficients {a k } ∞ k=0 is precisely the spectral measure for A + (and the cyclic vector δ 0 ). The main result of this section will be an explicit expression (4.9) for the Caratheodory function F ν analogous to (4.1), where M and W = W (0) are the matrices defined by (2.10) and (2.14), respectively. The shape of M ϑ is such that a corresponding identity holds; this follows from the commutant relation. Hence, if we consider the sequence {a j } ∞ j=0 which is obtained by extending {a j } p−1 j=0 in such a way that a j+kp = e −2ikϑ a j , then due to Corollary 2.3 and (1.27), the associated Caratheodory function satisfies the stated relation. Using again the simple observation above, we find the displayed identities. Moreover, (2.16) and (4.8) show that jY −1 0 W ϑ (z) ⊺ Y 0 j = M ϑ (z). So we conclude that tr W ϑ = tr M ϑ = tr T . A direct computation shows that, for the rotated rational functions, ϕ p,ϑ = e −iϑ ϕ p , ψ p,ϑ = e −iϑ ψ p , ϕ * p,ϑ = e iϑ ϕ * p , and ψ * p,ϑ = e iϑ ψ * p . The discriminant ∆ A defined by (1.19) can therefore be written as (4.10). What follows is a detailed study of properties of ∆ A . This will enable us to give a complete description of the spectral measure of A + by means of a uniquely associated divisor D.
With (4.10) in mind, the analog of the Lyapunov exponent (see, e.g., [24]) in our periodic setting is given by (4.14), provided the limit exists. We shall shortly relate the Lyapunov exponent to the discriminant. The following lemma is critical in this regard. Lemma 4.1. For every z ∈ C with z ∉ {z * j : 0 ≤ j ≤ n − 1}, the limit exists and satisfies L(z) ≥ 0.
Remark. In fact, we will see that L(z) = 0 if and only if z ∈ ∆ −1 A ([−2, 2]) = σ(A) ⊂ ∂D. Proof. Let λ 1 (z), λ 2 (z) be the eigenvalues of W ϑ (z). Then the first claim follows by the spectral radius formula. Moreover, since the inequality | det N | ≤ ‖N ‖ 2 holds for every 2 × 2 matrix N and det W ϑ = B 2 , we see that on C \ D the limit exists and satisfies L ≥ 0. To show that this also holds inside D, we first note the identity implied by (2.13) and (4.10). Hence, we may apply the Christoffel-Darboux formula (see [5, Theorem 3...]), recalling that b z kp (z) = z and |ϕ 0 | = 1. So for fixed z ∈ D we have a uniform lower bound on ‖W ϑ (z) k ‖, and this implies that L(z) ≥ 0.
The following lemma collects several important properties of ∆ A . It is the analog of [30, Theorem 11.1.1] for CMV matrices, and we keep the proofs short, merely indicating where adaptations are needed.
Proof. (i) The key is that B(z) −1 W ϑ (z) ∈ SU(1, 1) for z ∈ ∂D, together with the fact that the trace of a matrix in this class is real. The statement then follows by analytic continuation.
It follows from the lemma that E := ∆ −1 A ([−2, 2]) ⊂ ∂D is a finite-gap set with at most p gaps. As before, we denote by g + 1 the number of open gaps in E and by λ ± j their gap edges. In the following it will be important to know that ∆ A is a rational function of degree p. In fact, an even stronger statement is true: Lemma 4.3. Fix j and let q be the number of times z j appears in the vector z. Then (4.17) holds, where the constant C depends only on z; moreover, (4.18) holds. Proof. Note that (4.18) follows from the fact that ∆ A is real. Let us first assume that q = 1 and z j = z 0 = 0. In that case, (4.17) is equivalent to |(tr T )(0)| > C. It will be more convenient to consider the product below, which clearly has the same trace as T . Notice that V is a transfer matrix associated to the poles z 1 , . . . , z n−1 and with coefficients −a 0 , . . . , −a p−3 (in reverse order). Therefore, by (2.13), a short computation yields an expression in terms of the corresponding ORF of degree p − 2. Due to (4.19), it suffices to show that this quantity is uniformly bounded from below. For arbitrary z j , due to cyclic rotation, we would have obtained the same, just with orthogonal rational functions associated to different coefficients and a different measure. Recall that we can always "push" the matrix diag(e iϑ , e −iϑ ) to the end of the product by means of the commutant relation. Hence, it suffices to show that the relevant quantity is uniformly bounded from below for arbitrary orthogonal rational functions whose poles are supported on the set {z * i : 0 ≤ i ≤ n − 1, i ≠ j}. It is well known that any Caratheodory function F satisfies the uniform bounds (1 − |z|)/(1 + |z|) ≤ |F (z)| ≤ (1 + |z|)/(1 − |z|). Since sup j |z j | < 1, this gives positive constants c 1 , c 2 such that for every Caratheodory function F and every z j , we have that c 1 < |F (z j )| < c 2 . By the same reasoning, using the Christoffel-Darboux relation it is not hard to see that there exists a function m(r) such that for all measures ν, all n, and all z obeying |z| < r, we have |ϕ * n (z, ν)| > m(r) > 0; cf.
[5, Lemma 9.3.1]. Hence we obtain a constant c 3 such that |φ * (z j )| > c 3 uniformly.
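The uniform bounds on Caratheodory functions invoked in this proof are classical; for completeness, here is the standard one-line derivation from the Schur representation (with the normalization F(0) = 1):

```latex
% Every Caratheodory function F with F(0) = 1 has a Schur function
% f : D -> closure(D) with
%   F(z) = (1 + z f(z)) / (1 - z f(z)).
% Since |z f(z)| <= |z| < 1, the triangle inequality gives
%   1 - |z| <= |1 \pm z f(z)| <= 1 + |z|,
% and therefore
\[
  \frac{1-|z|}{1+|z|} \;\le\; |F(z)| \;\le\; \frac{1+|z|}{1-|z|},
  \qquad |z| < 1 .
\]
% Evaluating at z = z_j with sup_j |z_j| < 1 produces the uniform
% constants c_1, c_2 of the proof.
```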
Combining the two previous lemmas, we obtain the following explicit representation for ∆ A : Lemma 4.4. Let z be the uniformization of C \ E. Following the notation of (3.20), we have the representation (4.21), where Ψ • z is given by the displayed product. Proof. Consider the function H(z) defined below. It follows from Lemma 4.3 that H(z) has no poles and is thus harmonic in C \ E. Furthermore, since E := ∆ −1 A ([−2, 2]), we have H(z) = 0 for z ∈ E. By the maximum principle, it follows that H(z) ≡ 0. In particular, by an application of (3.8) we obtain the claimed identity, from which the lemma follows.
We are now ready to characterize the spectrum of A + . Since det M ϑ = B 2 , (4.9) shows that F ν can be written as (4.23), where u, v are explicitly given by (4.24) and where the branch of the square root is chosen such that F ν (0) = 1 and then extended analytically. Write dν(t) = ν ac (t) dt 2π + dν s (t), with ν s singular with respect to dt/2π. Using the standard inversion formula, we see that dν s is a finite sum of point masses and ν ac is explicitly given by (4.25). The following lemma (Lemma 4.5) will be important in characterizing the point masses of ν. Proof. First we show that all zeros of u lie on ∂D. Since |ϕ p,ϑ | < |ϕ * p,ϑ | on D and |ϕ p,ϑ | > |ϕ * p,ϑ | on C \ D (see [5, Corollary 3.1.4]), the assertion ϕ p,ϑ (z) = ϕ * p,ϑ (z) (i.e., u(z) = 0) implies that z ∈ ∂D. Next we show that, at a point z 0 ∈ ∂D with ϕ p,ϑ (z 0 ) = ϕ * p,ϑ (z 0 ), we have |∆ A (z 0 )| ≥ 2. First observe that this holds for a matrix N ∈ SL(2, R) with N 21 = 0; indeed, an application of the inequality of arithmetic and geometric means shows that | tr N | ≥ 2(det N ) 1/2 = 2. Set M = B(z 0 ) −1 M ϑ (z 0 ). Due to (2.10), M can be written as M = Y 0 U Y −1 0 for some U ∈ SU(1, 1). Viewed as fractional linear transformations, Y 0 maps the unit disc D into the right half-plane H + and U preserves the unit disc; thus, conjugating M further by an appropriate matrix R, we can transform M into an element of SL(2, R). It remains to note that, due to (2.12), M 21 is zero, and that this property (in addition to the determinant and the trace) is preserved under conjugation by R.
Since ∆ A is of degree p, there are exactly p gaps of E. As the square root in (4.25) is analytically extended, it changes sign in every gap. In order to retain positivity, the denominator must also admit a sign change, and this implies there is a zero of u(z) in every gap. Since u(z) is of degree at most p, we find that there is exactly one zero in each gap. Proposition 4.6. We can uniquely recover the measure ν from the divisor D and the discriminant ∆ A . Specifically, the absolutely continuous part of ν is given by (4.25), and if ε j = 1, the point mass at x j has the weight (4.27). Proof. If x j is a zero of u then, using again that det M ϑ = B 2 , the sign of the square root determines whether or not x j is a point mass of ν. If the numerator in (4.23) does not vanish, it is given by 2(∆ 2 A − 4) 1/2 and the weight of the point mass can be computed as in (4.27). Note that up to a multiplicative constant, u is determined by its zeros; this follows from (4.24) and the fact that ν is a probability measure. Hence, D and ∆ A determine ν uniquely.
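The recovery of ν from boundary values of F ν used in this proof can be illustrated on a toy measure (our own example, unrelated to the u, v representation above). For dν = ½ δ_1 + ½ dt/2π, the Caratheodory function is F(z) = ½ (1 + z)/(1 − z) + ½; its real part on circles re^{it} recovers the a.c. density at t ≠ 0, while the point mass is recovered from the radial limit (1 − r)/2 · F(r).

```python
import numpy as np

def F(z):
    """Caratheodory function of nu = (1/2) delta_1 + (1/2) dt/2pi."""
    return 0.5 * (1 + z) / (1 - z) + 0.5

r = 0.9999
# Away from the point mass, Re F tends to the a.c. density 1/2 (w.r.t. dt/2pi)
for t in (0.5, 1.0, np.pi, 5.0):
    assert abs(F(r * np.exp(1j * t)).real - 0.5) < 1e-2
# The point mass at z = 1 is recovered as lim_{r -> 1} (1 - r)/2 * F(r)
assert abs((1 - r) / 2 * F(r).real - 0.5) < 1e-3
```

This radial-limit computation is the standard mechanism behind weight formulas such as (4.27).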
For finite-gap sets E, Lemma 4.5 (and the comment thereafter) defines a map from {A ∈ A per ( z) : σ(A) = E} to D(E): namely, given A ∈ A per ( z), we consider the associated half-line operator A + , compute its Caratheodory function F ν as in (4.23), and find and label the zeros of u(z) as in (4.26). Proposition 4.6 shows this map is one-to-one. That this map is also onto is true in general, and will be shown for the special choice z = z E in Section 5.

The structure of a general MCMV matrix
In this section we demonstrate the "block-CMV" nature of MCMV matrices as in (1.7), as well as a structural stability under taking Möbius transformations related to the points in the generating vector z. This structure will be critical to understanding our main theorems; indeed, in light of viewing the discriminant as in (1.22), the Magic Formula would be a complete mystery without developing some understanding of the structure of b zj (A) for a general MCMV matrix A ∈ A( z).
Our analysis further illustrates the similarities between our MCMV matrices and their self-adjoint analog, GMP matrices. First, GMP matrices are block-Jacobi; below, we show the band structure (1.7) of an MCMV matrix in Lemma 4.7. Additionally, one of the characteristic properties of GMP matrices is that they are stable under taking resolvents (cf. [37,Definition 1.12]); the analogous statement for MCMV is Proposition 4.8. The consequences for the Magic Formula in the setting of (1.22) are the content of Theorem 4.9. Finally, we use all of this structure to prove a uniqueness result for certain rational functions of our MCMV matrices in Proposition 4.10; this will be used to prove the Magic Formula in the end.
Let us again fix a vector z ∈ D n having z 0 = 0, and recall that D 0 denotes the diagonal operator defined in (1.4) and C denotes a general CMV matrix. We begin by proving the block structure of an MCMV matrix A; up to conjugation by diagonal matrices, we may instead consider the simpler operator Ã defined below. Let us split Ã into matrix blocks of size 2n × 2n and denote the blocks by Ã ij . For simplicity, we assume throughout this section that z is such that z j ≠ z k for j ≠ k. Without this assumption, the block structure below will split into smaller blocks.

Lemma 4.7. Ã is band structured and satisfies (4.29) and (4.30).
In particular, u satisfies the recursion relation displayed below, whose left-hand side can be written in terms of the matrices introduced next. Since z kn = 0, for j = kn − 1 and j = kn this becomes the two displayed identities, respectively. Hence the recursion for the blocks {u 2kn , . . . , u 2(k+1)n−1 } is decoupled. The finite band structure and (4.29) are now a direct consequence of the structure of CMV matrices. It remains to prove (4.30). Let now u = (1 + CD * 0 ) −1 (C + D 0 )δ 2n . We are interested in the block {u 0 , . . . , u 2n−1 }. In this case, (4.31) leads to the first identity and, for 1 ≤ j < n − 1, to its analog; iterating this gives an explicit formula. Due to the simple structure of M 0 and N n−1 , it suffices to study C [ 1 0 ] . A direct computation, combined with the displayed identities, allows us to iterate the procedure. It only remains to comment on the sign, for which we note the final identity. This concludes the proof.
A key feature of the MCMV structure is its stability under Möbius transformations. Following the notation of Appendix B, we can write the operator Möbius transform defined in (1.3) accordingly. In particular, this holds for S = z j 1, where 1 denotes the identity matrix in ℓ 2 ; i.e., b zj (A) is the standard Blaschke factor evaluated at A ∈ D ℓ 2 . In addition to the usual diagonal operator D 0 , we likewise define shifted diagonal operators (4.34). With this notation, we can explicitly describe how the Blaschke factors associated to the generating vector z "shift" MCMV matrices: Proposition 4.8. Let D j and V j be defined as above. Then for any C ∈ D ℓ 2 , the displayed identity holds. Proof. Recalling that η j = 1 − |z j | 2 , it is straightforward to verify the first displayed identity. Due to (3.34), the claimed relation follows. This concludes the proof.
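The scalar content of this proposition, namely that b_{z_j} applied to a unitary operator is again unitary with spectrum moved by the scalar Blaschke factor, is easy to check numerically. A minimal sketch (our own illustration; the paper's normalization of b_{z_j} may differ by a unimodular constant):

```python
import numpy as np

def blaschke(A, zj):
    """Operator Blaschke factor b_{zj}(A) = (A - zj)(1 - conj(zj) A)^{-1}."""
    I = np.eye(A.shape[0])
    return (A - zj * I) @ np.linalg.inv(I - np.conj(zj) * A)

rng = np.random.default_rng(1)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(H)            # a random unitary test matrix
zj = 0.4 - 0.2j                   # a point in the open unit disk

B = blaschke(Q, zj)
assert np.allclose(B @ B.conj().T, np.eye(4))   # b_{zj}(Q) is again unitary
# The spectrum transforms by the scalar Blaschke factor b_{zj}
ev_Q = np.linalg.eigvals(Q)
ev_B = np.linalg.eigvals(B)
for w in (ev_Q - zj) / (1 - np.conj(zj) * ev_Q):
    assert np.min(np.abs(ev_B - w)) < 1e-8
```

Since |z_j| < 1, the operator 1 − z̄_j Q is boundedly invertible, and b_{z_j}(Q) is a rational function of Q; unitarity then follows from the spectral mapping theorem because b_{z_j} is unimodular on ∂D.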
Remark. So far we haven't used that z j ∈ z; however, this assumption is important to retain the banded structure of an MCMV matrix. In particular, since b zj (z j ) = 0, applying b zj to A shifts the zeros in D 0 by 2j. It follows that S −2j b zj (A)S 2j is again MCMV-structured (with a new generating vector).
We have analyzed the block structure and Blaschke shifts of MCMV matrices in order to understand the Magic Formula in the context of the representation (1.22). Critical to this understanding is computing off-diagonal blocks of selfadjoint operators of the form Re(c j b zj (A)) (cf. (1.24)); this is essentially the content of the final theorem of this section: Theorem 4.9. Let z be such that z j ≠ z k for j ≠ k and A ∈ A( z). Then (4.36) holds; in particular, so does (4.37). Proof. Equation (4.36) follows from the structure results above. Since tr(N ) = tr(jN ⊺ j), we obtain (4.37) by inserting j 2 between all factors.
For the final result of this section, the adaptation of the proof for repeated points in the vector z is not straightforward, so we drop our assumption of distinct z j 's at this point. By reordering the entries of z if needed, we can assume that entries of higher multiplicity are ordered consecutively; if z j = z j+1 we will denote both by z j . For a given vector z, suppose there are m ≤ n distinct entries z j , and let m j denote the multiplicity with which the point z j appears in z, so that m 0 + · · · + m m−1 = n. We call a rational function ∆ suitable for z if it is of the displayed form. The following result (Proposition 4.10) will be used in the proof of Theorem 1.4 and relies fundamentally on the structure of MCMV matrices obtained in Theorem 4.9. Proof. Let us assume for the sake of simplicity that e iϑ = 1 (the general case is analogous). The key to the proof is understanding the structure of the powers A i for 1 ≤ i ≤ m 0 . Due to Proposition 4.8, the structure of the powers b zj (A) i will then follow by shifting. Recall that m 0 denotes the multiplicity with which z 0 = 0 appears in the vector z. We analyze the structure of the 2n × 2n block of A formed by the entries {A ij } 2n−1 i,j=0 . Since the block structure is governed by repeated z 0 entries, our 2n × 2n block splits up into m 0 − 1 diagonal blocks of size 2 × 2 and a possibly bigger diagonal block of size 2(n − m 0 + 1) × 2(n − m 0 + 1). Let us denote the 2 × 2 block matrices by A 1 0 , . . . , A m0−1 0 and the larger block by A. On each side of the diagonal blocks A l 0 , there is a column vector of size 2; we write v l 0 for the vector to the left of A l 0 and u l 0 for the vector to the right. Similarly, on each side of A there is a column vector of size 2(n − m 0 + 1), which we denote analogously by v and u. In particular, for 0 ≤ l ≤ m 0 − 2 we find, due to (4.36) and (4.37), that A 2l,2l+2 = ρ 2l ρ 2l+1 > 0.
Moreover, the first entry of u is the nonzero value A 2(m0−1),2n = ρ ≠ 0; this entry is the only non-vanishing entry of A on the (2(n − m 0 ) + 2)-th diagonal. If we consider A 2 , we get exactly two non-vanishing entries on the (2(n − m 0 ) + 4)-th diagonal, given by A 2(m0−2),2n = ρ 2(m0−2) ρ 2(m0−2)+1 ρ and A 2(m0−1),2(n+1) = ρρ 2n ρ 2n+1 . Similarly, A i will have i non-vanishing entries on the outermost (2(n − m 0 ) + 2i)-th diagonal; in particular, A m0 will have m 0 non-vanishing entries on the 2n-th diagonal, given by A m0 2j,2(n+j) = ρ 2j ρ 2j+1 · · · ρ 2(m0−2) ρ 2(m0−2)+1 ρρ 2n ρ 2n+1 · · · ρ 2(n+j) ρ 2(n+j)+1 for 1 ≤ j < m 0 − 1, and A m0 2(m0−1),2(n+m0−1) = ρρ 2n ρ 2n+1 · · · ρ 2(n+m0−2) ρ 2(n+m0−2)+1 . A similar structure, but shifted, is obtained for all the matrices b zj (A) i . With this structure in mind we can finish the proof. We first consider the entries ∆(A) j,2n+j . On this diagonal, only the operators b zj (A) ±mj have non-vanishing entries (and we can guarantee that they are non-vanishing), but all of them are at different positions. Hence we see that c mj ,j = 0 for all 0 ≤ j ≤ m − 1. In the next step, we consider the diagonal ∆(A) j,2n−2+j and obtain analogously that c mj −1,j = 0. Inductively we see that all coefficients vanish, and consequently ∆ ≡ 0.

Proofs of the Main Theorems
We have laid nearly all the groundwork necessary to complete the proofs of our main theorems. Before we proceed, we recall the general strategy: in Section 3, we showed that, in the presence of a function B = B z having half-period character, there is a map F : Γ * × T → A per ( z) taking unimodular characters and a phase (α, τ ) to a periodic MCMV matrix F(α, τ ) := A(α, τ ) satisfying a Magic Formula. In Section 4, we defined a map G assigning a divisor to a periodic MCMV matrix. In this section we glue these constructions together via a third and final map, the Abel map A : D(E) → Γ * × T, and show that together they form the commuting diagram (5.1), in analogy to (4.4). Once this is done, our main theorems will follow as special cases for a particular choice of function B and vector z E . To properly introduce the Abel map, we briefly recall the construction from [25] of the bijective correspondences between the isospectral torus T CMV (E), the set of Schur functions S + (E) defined in (1.33), and the set of divisors D(E) given by (4.2).
We first set up the correspondence S + (E) ≃ D(E). Consider a function f + ∈ S + (E) and define F − as below. Strictly speaking, F − is not a Caratheodory function since it does not admit the normalization F − (0) = 1; however, it still maps D analytically into the right half-plane. Using (1.31), we see directly that the function has no zeros in the gaps of E. Hence, since t → Im F (e it ) is decreasing on each gap, this function can have at most one sign change per gap, caused by a possible pole x j (where j indexes the j-th gap). The measure ν in the integral representation of F is purely absolutely continuous on E, and condition (1.30) implies that ν ac is split equally between F + and F − . But due to (1.31), the point mass at x j can only correspond to either F + or F − and not to both. We write (x j , 1) if x j is a pole of F + and (x j , −1) if x j is a pole of F − . Special consideration is needed for the endpoints of the gaps: by convention, we write (λ − j , 1) if Im F ≤ 0 in the closed gap [λ − j , λ + j ] and (λ + j , 1) if Im F ≥ 0. With these choices, the collection D = {(x j , ε j )} g j=0 is the divisor in D(E) associated to f + . Conversely, one can show that any divisor D ∈ D(E) leads to a function f + ∈ S + (E) (see [25, Theorem 1.4] for details). The correspondence T CMV (E) ≃ S + (E) is implicitly given in Section 3.2. The half-line restriction C + of an element C = C(α, τ ) ∈ T CMV (E) is linked to the Schur function f α,τ + given by (5.2) through the relation (1 + zf α,τ + (z))/(1 − zf α,τ + (z)) = ⟨(C + (α, τ ) − z) −1 (C + (α, τ ) + z) δ 0 , δ 0 ⟩.
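The Schur-Caratheodory correspondence underlying this relation can be checked directly: if f maps D into its closure, then F = (1 + zf)/(1 − zf) is analytic on D with Re F > 0 and F(0) = 1. A small self-contained sketch with a sample Schur function of our own choosing:

```python
import numpy as np

def schur_to_caratheodory(f, z):
    """F(z) = (1 + z f(z)) / (1 - z f(z))."""
    return (1 + z * f(z)) / (1 - z * f(z))

# A sample Schur function: a single Blaschke factor, a self-map of the disk
f = lambda z: (z - 0.3) / (1 - 0.3 * z)

assert abs(schur_to_caratheodory(f, 0) - 1) < 1e-14   # normalization F(0) = 1
rng = np.random.default_rng(2)
for _ in range(200):
    z = rng.uniform(0, 0.999) * np.exp(1j * rng.uniform(0, 2 * np.pi))
    assert schur_to_caratheodory(f, z).real > 0       # Re F > 0 on D
```

The positivity is immediate from |zf(z)| < 1: the Möbius map w ↦ (1 + w)/(1 − w) sends the open unit disk onto the right half-plane.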
In fact, the map (α, τ ) → f α,τ + sets up a bijection between Γ * × T and S + (E). This also enables us to define the Abel map A : D(E) → Γ * × T. The first lemma of this section (Lemma 5.1) demonstrates that the diagram (5.1) commutes if we replace A per ( z) with the image F(Γ * × T). Let D̃ be the divisor associated to the periodic operator A(α, τ ) ∈ A per ( z) by our construction in Section 4.2, that is, D̃ = G (F(α, τ )). Therefore, the identities f̃ + = f α,τ + and F̃ + = F α,τ + follow by comparing (4.9) with Theorem 3.11. We have now seen that (x̃ j , 1) corresponds to poles of F̃ + . To see A(D̃) = (α, τ ), we only need to show that (x̃ j , −1) as defined in Section 4 corresponds to poles of the function F̃ − . Due to (4.23), we have the representation (5.5), where v(e it ) ∈ R and u(e it ) ∈ iR.
Consider now the function F̃_−. Since ∆_A(e^{it})² − 4 ∈ iR on E, we obtain that F̃_+(e^{it}) coincides with the complex conjugate of F̃_−(e^{it}) for all e^{it} ∈ E. Hence, if x̃_j is a zero of u and the numerator in (5.5) vanishes (i.e., ε_j = −1), then F̃_− has a pole at x̃_j. This concludes the proof.
Remark. Note that (5.5) and (5.6) show that the absolutely continuous parts of the corresponding measures agree and are given by (4.25).
In order to show commutativity of (5.1), we still have to show that, for a given z, the functional model construction F surjects onto those MCMV matrices in A_per(z) having a fixed spectrum; that is, for an arbitrary z and A ∈ A_per(z) with spectrum σ(A) = E, there exists a function B = ∏_{j=0}^{n−1} b_{ζ_j} whose character is a half-period and such that A = A(α, τ) corresponds to the functional model associated to B. To show that the character of B is a half-period, the representation of ∆_A from Lemma 4.4 will be crucial.

Proposition 5.2. Let A ∈ A_per(z) with σ(A) = E. Then A = A(α, τ), where A(D) = (α, τ) for the divisor D associated to A, and A(α, τ) is defined by (3.59) for the functional model associated to B = ∏_{j=0}^{n−1} b_{ζ_j}. Moreover, the diagram (5.1) commutes.
Proof. Suppose A is periodic and let D be the associated divisor. Write (α, τ) = A(D) for the corresponding character and phase. Due to Lemma 4.4, ∆_A admits a representation in terms of B, B^*, and Ψ. Since the characters of B and B^* coincide and Ψ is single-valued, the character of B must be a half-period. Suppose now that A(α, τ) is the matrix representing multiplication by z in the basis {y_k^{α,τ}} for the functional model associated to B. Then, by Lemma 5.1, the Caratheodory functions of A and A(α, τ) coincide, and hence we obtain (5.7). Since it was already shown in [25] that the Abel map is a bijection, this also proves the commutativity of (5.1).
We are now in a position to complete the proofs of our main results stated in the introduction. As already mentioned in the first remark of Section 3.3, the existence of the Ahlfors function for an arbitrary finite-gap set E ensures that there always exists a function B whose character is a half-period. To be specific, the function w_∞ = z(w_∞ ∘ z) is an explicit choice of such a function. Our first main result, Theorem 1.2, now follows as the special case of Proposition 5.2 with z = z_E (and B = w_∞).
Proof of Theorem 1.2. For the special choice z = z_E, we define the map F : Γ* × T → T_MCMV(E) in the same way as before; that is, F(α, τ) = A(α, τ), where A(α, τ) is the matrix representation of multiplication by z in the basis {y_k^{α,τ}} for the functional model associated to the function B = w_∞. Our considerations have shown that this F is a bijection, which concludes the proof of (1.11).
Tracing through our construction also leads to a more explicit version of the one-to-one correspondence between an element of T_MCMV(E) and its counterpart in T_CMV(E).
Our analysis also leads to a quick proof of the Magic Formula, our second main result.
Proof of Theorem 1.4. Let A be an element of A(z_E). If A ∈ T_MCMV(E), then we know from Proposition 5.2 that A = A(α, τ) for some choice of character and phase. Hence the Magic Formula ∆_E(A) = S^{2(g+1)} + S^{−2(g+1)} follows from Corollary 3.12 with n = g + 1.
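As an illustrative sanity check (ours, not part of the proof), the simplest instance of a Magic Formula of this shape can be verified directly: for the free two-sided CMV matrix, with all Verblunsky coefficients zero so that every Θ block is a plain transposition, the operator identity C + C^{−1} = S² + S^{−2} holds exactly, matching the right-hand side S^{2(g+1)} + S^{−2(g+1)} when g = 0. The sketch below checks this numerically with periodic boundary conditions to stay finite-dimensional; the helper `swap` and the period `m` are our own devices.

```python
import numpy as np

m = 6                      # half the period; all matrices are 2m x 2m
n = 2 * m
I = np.eye(n)

def swap(pairs, n):
    """Permutation matrix exchanging each pair (j, k), indices taken mod n."""
    P = np.eye(n)
    for j, k in pairs:
        j, k = j % n, k % n
        P[[j, k], :] = P[[k, j], :]
    return P

# Free CMV matrix (all Verblunsky coefficients zero) with periodic boundary:
# Theta(0) is a 2x2 swap, so L and M are products of disjoint transpositions
# on even and odd pairs, respectively.
L = swap([(2 * j, 2 * j + 1) for j in range(m)], n)
M = swap([(2 * j + 1, 2 * j + 2) for j in range(m)], n)
C = L @ M

# Cyclic shift S: delta_j -> delta_{j+1 mod n}
S = np.roll(I, 1, axis=0)

# g = 0 instance of the Magic Formula: C + C^{-1} = S^2 + S^{-2}.
# C is a real permutation matrix, so C^{-1} = C^T.
lhs = C + C.T
rhs = np.linalg.matrix_power(S, 2) + np.linalg.matrix_power(S, n - 2)
assert np.allclose(lhs, rhs)
```

The identity can also be seen by hand: C sends even basis vectors δ_{2j} to δ_{2j−2} and odd ones δ_{2j+1} to δ_{2j+3}, so C + C^{−1} shifts every basis vector by ±2 simultaneously.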