Differential Equations for Approximate Solutions of Painlevé Equations: Application to the Algebraic Solutions of the Painlevé-III (D7) Equation

Abstract. It is well known that the Painlevé equations can formally degenerate to autonomous differential equations with elliptic function solutions in suitable scaling limits. A way to make this degeneration rigorous is to apply Deift–Zhou steepest-descent techniques to a Riemann-Hilbert representation of a family of solutions. This method leads to an explicit approximation formula in terms of theta functions and related algebro-geometric ingredients that is difficult to link directly to the expected limiting differential equation. However, the approximation arises from an outer parametrix that satisfies relatively simple conditions. By applying a method that we learned from Alexander Its, it is possible to use these simple conditions to obtain the limiting differential equation directly, bypassing the details of the algebro-geometric solution of the outer parametrix problem. In this paper, we illustrate the use of this method by relating an approximation of the algebraic solutions of the Painlevé-III (D7) equation, valid in the part of the complex plane where the poles and zeros of the solutions asymptotically reside, to a form of the Weierstraß equation.

where α, β, γ, δ ∈ C are parameters. The Painlevé-III (D7) equation is the special degenerate case in which γ = 0 and αδ ≠ 0. For more information on this equation see, for example, Kitaev and Vartanian [11]. With the choice of parameters α = 8, β = 2n, and δ = −1, the Painlevé-III (D7) equation admits an algebraic solution for each n ∈ Z. Specifically, define functions R_n(ζ) for n ∈ Z via the recurrence relation: Examples of these functions are: If n ≥ 0, then R_n(ζ) is a polynomial in ζ known as an Ohyama polynomial [12]. The algebraic solution to (1.1), unique on the Riemann surface of x^{1/3}, is u(x) = u_n(x), n ∈ Z, where:
The u_n(x) are rational functions of x^{1/3}. If one selects the principal branch for x^{1/3}, then each of these produces three distinct algebraic solutions on the complex plane: u_n(x) and u_n(e^{±2πi}x). Some examples are u_{−2}(x) = 9x^{5/3} + 12x + 5x^{1/3}. See Clarkson [5] for additional background on these functions. The Painlevé-III (D7) equation (1.1) is invariant under the symmetries u(x) → ±iu(±ix), n → −n, and it is easily seen that ±iu_n(±ix) = u_{−n}(e^{±2πi}x). In this paper, we will assume that n ≥ 0 and restrict attention to the principal sheet −π < arg(x) < π.
It is natural to introduce a scaled independent variable y via

x = n^{3/2} y. (1.2)

Under this scaling, plots show that the zeros and poles of u_n(n^{3/2}y) appear to be confined, for n large, to a "bow-tie" shaped bounded region in the Y-plane with Y := y^{1/3} that is asymptotically independent of n. See Figure 1. In [4], the limiting region was characterized precisely, and it was proved that on the unbounded exterior of this region the related function U_n(y) := n^{−1/2} u_n(n^{3/2}y) converges as n → ∞ to the solution Ȗ(y) of the cubic equation
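The cubic equation (1.3) itself is not reproduced in this excerpt, but a cubic of the expected shape can be recovered by dominant balance. The following sketch is our reconstruction, assuming the standard Painlevé-III form u'' = (u')²/u − u'/x + (αu² + β)/x + γu³ + δ/u with the stated parameters α = 8, β = 2n, γ = 0, δ = −1; the source's normalization may differ.

```latex
% Dominant balance (a reconstruction, not quoted from the source):
% substitute u = n^{1/2} U and x = n^{3/2} y into
%   u'' = (u')^2/u - u'/x + (8u^2 + 2n)/x - 1/u
% and keep the O(n^{-1/2}) terms; the derivative terms are subdominant, so
\[
\frac{8 n U^2 + 2n}{n^{3/2} y} - \frac{1}{n^{1/2} U} = 0
\quad\Longrightarrow\quad
8\,\breve{U}(y)^3 + 2\,\breve{U}(y) = y .
\]
% Consistency check: the branch points of this cubic, where
% 24\breve{U}^2 + 2 = 0, occur at y = \pm 2i/(3\sqrt{3}), so
% |y| = 2\cdot 3^{-3/2} \approx 0.3849, which matches the corner points
% Y = \pm 2^{1/3} 3^{-1/2} e^{\pm i\pi/6} of the bow-tie region
% (note |Y|^3 = 2\cdot 3^{-3/2} and \arg(Y^3) = \pm\pi/2).
```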

Formal degeneration of Painlevé-III (D7)
Motivated by this convergence result, we may refine the scaling (1.2) by introducing a second parameter z ∈ C and considering the solution as a function of z for fixed y ∈ C. Then, the Painlevé-III (D7) equation (1.1) with parameter n for u_n(x) implies: Thus, neglecting the O(n^{−1}) error term, the Painlevé-III (D7) equation formally degenerates to a one-parameter family, parametrized by y ∈ C, of autonomous differential equations that we write for an unknown Ȗ(z) = Ȗ(z; y): (1.5) The cubic equation (1.3) corresponds to the solutions of (1.5) that are independent of z. For non-constant solutions, (1.5) can be multiplied by the integrating factor Ȗ′(z)/Ȗ(z)² and then integrated once to obtain (1.6), wherein E ∈ C is an integration constant. Setting Ȗ(z) = (1/4)y℘(z − z_0) − (1/24)yE for arbitrary z_0, one finds that ℘(z) solves the Weierstraß equation [13, Chapter 23]

℘′(z)² = 4℘(z)³ − g_2 ℘(z) − g_3. (1.7)

Thus, one might expect the algebraic solutions to be locally approximated, near a point y with y^{1/3} in the bounded "bow-tie" region, by a Weierstraß elliptic function of z with invariants depending on y and E. However, this formalism does not explain how E should be chosen given y, nor does it determine the offset z_0, and it is not a rigorous argument. For that, one can use a Riemann-Hilbert characterization of U_n(y) that was also found in [4]. We describe this characterization next.
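The integrating-factor step can be made explicit, assuming that the formal degeneration (1.5) takes the autonomous form Ȗ'' = Ȗ'²/Ȗ + 8Ȗ²/y + 2/y − 1/Ȗ. This is our reconstruction from the stated parameters α = 8, β = 2n, δ = −1; the source's normalization of E in (1.6) may differ.

```latex
% Multiply the assumed form of (1.5) by the integrating factor
% \breve U'/\breve U^2 and recognize both sides as exact z-derivatives:
\[
\frac{d}{dz}\left[\frac{\breve U'^2}{2\breve U^2}\right]
= \frac{d}{dz}\left[\frac{8\breve U}{y} - \frac{2}{y\breve U}
  + \frac{1}{2\breve U^2}\right],
\]
% so one integration gives (up to the normalization of the constant E)
\[
\breve U'^2 = \frac{16}{y}\breve U^3 + 2E\,\breve U^2
 - \frac{4}{y}\breve U + 1 .
\]
% Substituting \breve U = \tfrac14 y\,\wp(z-z_0) - \tfrac1{24}yE cancels the
% \wp^2 term exactly and yields the Weierstrass form (1.7) with invariants
\[
g_2 = \frac{E^2}{3} + \frac{16}{y^2}, \qquad
g_3 = -\frac{E^3}{27} - \frac{8E}{3y^2} - \frac{16}{y^2}.
\]
```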

Riemann-Hilbert characterization
Here a subscript + (resp. −) refers to a boundary value taken on an oriented arc from its left (resp. right) side.

Normalization:

Behavior as η → 0: The limit of

In the last two conditions, powers of −iη are taken to be cut on Σ_0^− ∪ Σ_∞^− and agree with the principal branches for large positive imaginary η.
The last property can be used to define a matrix

Main aims of the paper
The conditions of Riemann-Hilbert Problem 1.1 involve the large parameter n in an explicit way, and there are well-known techniques originating in the Deift–Zhou steepest-descent method [6] for analyzing such problems. One first needs to control the large exponential factors in the jump matrices by introducing an appropriate scalar g-function. The difference between g and Φ is a function h whose derivative satisfies an algebraic equation defining the spectral curve. We show below that when y corresponds to a point in the "bow-tie", the spectral curve has genus 1 and the landscape of Re(h) has the properties necessary to continue the analysis. The next step exploits analytic factorizations of the jump matrices to "open lenses" by moving certain factors off the jump contour onto nearby arcs. After this step, all jump matrices decay rapidly to the identity as n → ∞, except along certain arcs where in the same limit a nontrivial limiting jump matrix emerges instead. In the third step, one uses the limiting jump matrix to define a Riemann-Hilbert problem for an approximation called an outer parametrix; in addition, one or more inner parametrices are needed near certain points where the convergence of the jump matrix is not uniform. One pieces together a global parametrix from the outer and inner parametrices to define an as-yet-unjustified approximation of the solution of the "opened lenses" problem. Finally, one proves a convergence theorem by showing that the matrix ratio of the unknown and its global parametrix solves a special kind of Riemann-Hilbert problem (a small-norm problem) for which the solution is uniformly close to the identity. In this scheme, the approximate formula for U_n(y) comes from the outer parametrix. In the situation we discuss in this paper, in which y corresponds to a point in the "bow-tie", this outer parametrix can be written explicitly in terms of theta functions of genus 1 and elliptic integrals. Actually, we first replace y with y + n^{−1}z as in (1.4) but use a g-function depending on y only; one then obtains an approximate formula for U_n(y + n^{−1}z) explicitly involving the independent variable z that should be related to the Weierstraß equation if the formal reasoning described in Section 1.2 above is correct. However, it is very difficult to prove such a connection directly from the approximation formula for U_n(y + n^{−1}z); at the very least it is a calculation that is a complicated diversion from what should be a relatively simple path from (1.1) to (1.6) or (1.7).
Our aim in this paper is not to give all details of the convergence proof; rather, we focus on explaining a reasonably effective way to make the connection between the outer parametrix Riemann-Hilbert problem (whose conditions are far simpler than the elliptic solution they generate) and the limiting differential equation (1.6). Our approach also determines the value of the integration constant E in (1.6) as a function of y (equivalently, the values of both invariants in the Weierstraß equation (1.7) are so determined). For those who would like to see the basic idea of this method illustrated in a simple setting, in Appendix A we show directly (without reference to the known exact solution) that certain quantities derived from a toy Riemann-Hilbert problem satisfy simple differential equations. We originally learned this method from Alexander Its (see, for example, [8, Chapter 8] and [9]), and it is a pleasure to write this article in his honor.

Considered as an algebraic relation between h_η and η, this defines the spectral curve, which will have genus 1 provided c and y are chosen so that the three roots of the cubic µ ↦ P(µ, y, c) are distinct and nonzero. Note that if y ≠ 0, then P(0, y, c) ≠ 0, so µ = 0 cannot be a root. Let us label the three distinct nonzero (for y ≠ 0 and generic c) roots by µ = s_j, j = 1, 2, 3, so that P(µ, y, c) = −(µ − s_1)(µ − s_2)(µ − s_3). Let Σ_{0,1} be an arc in the η-plane joining η = 0 to η = is_1, let Σ_{2,3} be an arc in the η-plane joining η = is_2 to η = is_3, and assume that Σ_{0,1} ∩ Σ_{2,3} = ∅.
Then we may define h_η(η, y, c) unambiguously using (2.1) by assuming that: with the power function being cut on Σ_∞^− ∪ Σ_{0,1} and coinciding with the principal branch for large positive imaginary η.
Next we attempt to determine c given y ≠ 0 by imposing two real Boutroux conditions: where the path of integration in each case lies in the domain of analyticity of η ↦ h_η(η, y, c).
Although the latter domain is multiply connected, and hence I_{1,2} and I_{2,3} are only well-defined modulo a finitely-generated symmetry group, the conditions (2.2) are independent of the specific choice of paths, due to the fact that h_η(η, y, c) changes sign across its branch cuts and the fact that the residue of h_η(η, y, c) at η = ∞ is real. If we introduce the real and imaginary parts of c by c_R := Re(c) and c_I := Im(c), so that c = c_R + ic_I, then it follows that if paths of integration are selected so that I_{1,2}(y, c) and I_{2,3}(y, c) depend smoothly on (c_R, c_I), the Jacobian determinant factors into two complete elliptic integrals of the first kind over paths that form a basis for homology on the corresponding elliptic curve. Hence, under the assumption that all four roots are distinct, the Jacobian is nonzero [7, Corollary 1]. By the implicit function theorem, whenever a pair y ≠ 0 and c ∈ C are such that both equations (2.2) hold and P(·, y, c) has distinct roots, the solution (c_R, c_I) of (2.2) can be continued smoothly to nearby values of y ∈ C. We now assume that y and c are related so that the conditions (2.2) hold. The function η ↦ h_η(η, y, c) extends to a single-valued function on the two-sheeted Riemann surface R over the η-plane defined by the spectral curve (2.1). The differential h_η(η, y, c) dη is meromorphic on R, with double poles (in suitable local coordinates) at the two points over η = ∞ and at the branch point η = 0, and no other singularities. The residues of h_η(η, y, c) dη at the two points over η = ∞ are the opposite real values ±1/2, and then the residue at η = 0 necessarily vanishes. It follows that if c is such that the conditions (2.2) hold, then the multi-valued function h(η, y, c), defined on R up to an integration constant by contour integration of h_η(η, y, c) dη, has a real part that is single-valued on R.
Selecting the integration constant such that Re(h(η, y, c)) vanishes at any one of the points η = is_j, j = 1, 2, 3 (and hence at all three of them by (2.2)), the projection of the zero level set of Re(h(η, y, c)) to either sheet of R is the same set in the η-plane, which we denote by K.
It is known [4, Theorem 3] that for large n, u_n(n^{3/2}Y³) is pole- and zero-free for Y in an unbounded domain E whose complement is a "bow-tie" shaped region B := C \ E in the Y-plane that is symmetric with respect to reflection in the real and imaginary axes. The interior of B is the disjoint union of two "wings", one on either side of the imaginary axis. The wings are joined at the origin only, and they are bounded in part by the straight-line segments joining the pairs Y = ±2^{1/3}/3^{1/2} e^{iπ/6} and Y = ±2^{1/3}/3^{1/2} e^{5πi/6}. The set B ∩ R consists of an interval characterized by the value y_c ≈ 0.29177. See the left-hand panel of Figure 1.
Lemma 2.1. Assume that Re(y) > 0 and that Y = y^{1/3} lies in the open interior of B. Then there is a well-defined value c = c_1(y) ∈ C, a smooth function of the real variables Re(y) and Im(y) but not analytic in y, such that the following hold.
The zero level set K of Re(h(η, y, c)) consists of the arcs Σ_{0,1} and Σ_{2,3}, two arcs joining η = is_1 to η = is_2 that bound a region containing η = 0, and two unbounded arcs emanating from η = is_3, one tending to η = ∞ in the left half-plane and one tending to η = ∞ in the right half-plane. Re(h(η, y, c)) changes sign across each of these arcs except for Σ_{0,1} and Σ_{2,3}.
If y > 0 and also y < y_c ≈ 0.29177, so that Y = y^{1/3} lies in B, then s_1 < 0 < s_2 < s_3, Σ_h consists of the part of the imaginary axis in the η-plane below the point η = is_3, and:

We give the proof in the appendix. The structure of the set K allows the arcs of the jump contour for Riemann-Hilbert Problem 1.1 to be chosen in a useful way, as illustrated in the left-hand panel of Figure 3 for 0 < y < y_c. The picture is topologically equivalent provided that Re(y) > 0 and Y = y^{1/3} lies in the interior of B, as in the conditions of Lemma 2.1.

Introduction of g-function and lens opening
We assume from now on that Re(y) > 0 and that Y = y^{1/3} lies in the interior of B. Also, since c = c_1(y) is determined from y according to Lemma 2.1, we will write h(η, y) = h(η, y, c_1(y)) going forward. Under these assumptions, in this section we implement the first two steps of the asymptotic analysis of Riemann-Hilbert Problem 1.1 with y replaced by y + n^{−1}z.

First step: introduction of g-function
From h(η, y), we define a related function by: with Φ(η, y) defined by (1.8). In particular, there is a function g_0(y) such that: We then use g(η, y) to modify the matrix Z^{(n)}(η, y + n^{−1}z) by setting: Note that while Z^{(n)}(η, y + n^{−1}z) depends on (y, z) only through the combination y + n^{−1}z, the function M^{(n)}(η, y, z) involves these variables in a more complicated fashion. However, as a function of η, M^{(n)}(η, y, z) is analytic where Z^{(n)}(η, y, z) is, and according to (3.1), is normalized to the identity as η → ∞:

Figure 3 (caption). Left panel: the sign of Re(h) is as indicated, and Re(h) only changes sign across the gray arcs of K. Note that the contour Σ_∞^+ actually extends from η = is_2, taken as the junction point of C_−, C_+, and Σ_0^+, all the way up the positive imaginary axis, passing through Σ_{2,3}. Likewise, Σ_∞^− extends from −i∞ up to is_1. The arc Σ_0^− coincides with the branch cut Σ_{0,1}. Center panel: the jump contour for N^{(n)}(η, y, z) has two additional arcs on the left and right of the branch cut Σ_{2,3} after opening a lens. Right panel: the jump contour for N^{(n),out}(η, y, z) consists of the arcs Σ_∞^−, Σ_{0,1}, Σ_0^+, and Σ_{2,3} (shown with solid curves; the dashed arcs in the jump contour for N^{(n)}(η, y, z) have been neglected).

Outer parametrix Riemann-Hilbert problem
In addition to the conditions placed so far on y, we now suppose that z ∈ C is bounded. Then the jump matrices for N^{(n)}(η, y, z) all decay exponentially rapidly to the identity except near the two branch cuts Σ_{0,1} and Σ_{2,3} of h_η(η, y), near the arc Σ_∞^−, and near the arc Σ_0^+. On the branch-cut arcs, the jump conditions read as follows: for η ∈ Σ_{2,3}, where the boundary values are determined by orientation of Σ_{2,3} toward η = is_3(y): (4.1) and for η ∈ Σ_{0,1}, where Σ_{0,1} is oriented toward η = is_1(y): (4.2) Note that the sum of the boundary values of (−iη)^{−1/2} vanishes on this contour, so as the jump matrix is off-diagonal, the outer exponential factors in (4.2) could have been omitted, but it is convenient to write them here anyway. The jump matrices in (4.1) and (4.2) are rapidly oscillatory in the parameter y, but the only dependence on η enters via the diagonal conjugating factors with exponents proportional to z. Next, there is a residual jump across the contour Σ_∞^−, with orientation toward η = is_1(y), in the limit n → ∞: (4.3) for η ∈ Σ_∞^−, with the estimate of the error arising in the limit n → ∞ because Re(h) < 0 on Σ_∞^− (see Figure 3). Finally, there is also a residual jump across the contour Σ_0^+, with orientation toward η = 0, in the limit n → ∞:

N_+^{(n)}(η, y, z) = N_−^{(n)}(η, y, z) e^{z(−iη)^{−1/2}σ_3} (e^{−inκ(y)σ_3} + exponentially small) e^{−z(−iη)^{−1/2}σ_3} for η ∈ Σ_0^+.

Neglecting the exponentially small terms, (4.1)–(4.3) define the limiting jump conditions to be satisfied by an outer parametrix. The convergence of the jump matrices overall to these three limits is not uniform near the three points η = is_j(y), j = 1, 2, 3, and one can install standard inner parametrices near each of these points, constructed from Airy functions, to correctly approximate N^{(n)}(η, y, z) nearby; see [4, Section 4.4.2] for some details in a very similar setting. However, no inner parametrix is needed near η = 0 if one specifies suitable behavior for the outer parametrix at this point matching that inherited from Z^{(n)}(η, y + n^{−1}z).
By definition, the outer parametrix N^{(n),out}(η, y, z) is the solution of the following Riemann-Hilbert problem, which retains all of the most important properties of N^{(n)}(η, y, z) when η is bounded away from η = is_j(y), j = 1, 2, 3, and builds in the key property near these points needed to facilitate a good match between the outer and inner (Airy) parametrices, namely a negative one-fourth power singularity.
The jump contour for N^{(n),out}(η, y, z) is illustrated in the right-hand panel of Figure 3. This Riemann-Hilbert problem can be solved explicitly, but the construction is not as simple as the above conditions suggest. It involves elliptic integrals on the genus-1 spectral curve and corresponding Jacobi theta functions. Full details of the solution of a similar problem can be found in [1, Section 4.4.2], for instance. The solution formula shows that, given n and y, N^{(n),out}(η, y, z) exists for all z ∈ C except for a doubly periodic lattice of isolated points. However, we will have no need of the resulting complicated formulae in this paper.
Replacing y with y + n^{−1}z in (1.10) and (1.11) expresses the rescaled algebraic solution as: Also, writing Z^{(n)}(η, y + n^{−1}z) in terms of M^{(n)}(η, y, z) by (3.2), and using the fact that M^{(n)}(η, y, z) = N^{(n)}(η, y, z) identically for η in a neighborhood of the origin, it then follows that: To obtain an approximation Ȗ_n(z; y) of U_n(y + n^{−1}z), we replace the expression N^{(n)}(η, y, z) with N^{(n),out}(η, y, z) in this formula; then, using (4.5), we get Ȗ_n(z; y): (4.6) Note that by taking the limit η → 0 from the left and right sides of the jump contour through the origin, one obtains two equivalent formulae for the matrix coefficient A_0^{(n)}(y, z): where the ± signs correspond in the two instances. Accuracy of the approximation of U_n(y + n^{−1}z) by Ȗ_n(z; y) in the limit of large n hinges on the details of the analysis of a small-norm Riemann-Hilbert problem for the matrix ratio between N^{(n)}(η, y, z) and its global parametrix. This is important, but it takes us far from our main goal in this work, which is to explain how one can prove, relatively easily and directly from the conditions of Riemann-Hilbert Problem 4.1, that Ȗ(z) = Ȗ_n(z; y) as defined in (4.6) is an exact solution of the elliptic differential equation (1.6) for a specific choice of the integration constant E as a function of y.

Derivation of the Weierstraß differential equation for Ȗ_n(z; y)
It is a familiar outcome that various coefficients in the expansion of the solution of a Riemann-Hilbert problem depending on a parameter z satisfy important differential equations. Indeed, this is exactly how one can be sure that Riemann-Hilbert Problem 1.1 generates a rescaled solution of the Painlevé-III (D7) equation (1.1) by formula (1.11) for each n. Such a computation is done in [4, Section 3.2] for a Riemann-Hilbert problem equivalent to Riemann-Hilbert Problem 1.1 but with an unknown denoted W^{(n)}(λ, x). The steps are as follows. One first introduces a diagonal exponential transformation by setting: This has the effect of making the induced jump matrices for Ψ^{(n)}(λ, x) arcwise independent of both λ (the complex variable of the Riemann-Hilbert problem) and x (the independent variable of the Painlevé-III (D7) equation in the form (1.1)).
It then follows by differentiation of the jump conditions that the matrices Λ^{(n)}(λ, x) and X^{(n)}(λ, x) defined in (5.1) are analytic in λ except at isolated singular points, which in this case are λ = ∞ and λ = 0.
By expanding Ψ^{(n)}(λ, x) and its derivatives near the singular points using information from the Riemann-Hilbert problem for W^{(n)}(λ, x), one deduces that both Λ^{(n)}(λ, x) and X^{(n)}(λ, x) are rational functions of λ with principal parts expressed in terms of expansion coefficients of W^{(n)}(λ, x).
Re-arranging the equations (5.1) with this new knowledge, one sees that Ψ^{(n)}(λ, x) satisfies an overdetermined system consisting of two first-order 2×2 linear systems, one with respect to λ and the other with respect to x.
Expressing the compatibility condition between the two systems in terms of the elements of the matrices Λ^{(n)}(λ, x) and X^{(n)}(λ, x), one separates out from the various powers of λ a closed system of nonlinear differential equations with respect to x alone on the coefficients. This system implies the Painlevé-III (D7) equation (1.1).
Analogues of these steps are frequently called the dressing method in many papers.
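The compatibility step above can be checked symbolically. The following sketch is our illustration, not taken from the source, and the explicitly solvable Lax pair is a hypothetical toy example: it verifies that if Ψ_λ = ΛΨ and Ψ_x = XΨ share a fundamental solution, then equality of mixed partial derivatives forces the zero-curvature condition Λ_x − X_λ + [Λ, X] = 0.

```python
import sympy as sp

lam, x = sp.symbols('lambda x')

# A hypothetical, explicitly solvable Lax pair: Psi = exp((lam**2 * x)*sigma3)
# satisfies Psi_lam = Lambda*Psi and Psi_x = X*Psi with the matrices below.
sigma3 = sp.Matrix([[1, 0], [0, -1]])
Psi = sp.Matrix([[sp.exp(lam**2 * x), 0], [0, sp.exp(-lam**2 * x)]])

Lambda = sp.diff(Psi, lam) * Psi.inv()   # = 2*lam*x*sigma3
X = sp.diff(Psi, x) * Psi.inv()          # = lam**2*sigma3

# Zero-curvature (compatibility) condition: Psi_{lam,x} = Psi_{x,lam} forces
#   Lambda_x - X_lam + Lambda*X - X*Lambda = 0.
ZC = sp.diff(Lambda, x) - sp.diff(X, lam) + Lambda * X - X * Lambda
print(sp.simplify(ZC))  # the 2x2 zero matrix
```

For a genuine Painlevé Lax pair, the same vanishing condition, separated by powers of λ, is what yields the nonlinear equations for the coefficients.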
It is a natural expectation that a similar approach might apply to Riemann-Hilbert Problem 4.1 to allow one to deduce a differential equation with respect to z satisfied by Ȗ_n(z; y). Indeed, the matrix function

η ↦ F^{(n)}(η, y, z) := N^{(n),out}(η, y, z) e^{z(−iη)^{−1/2}σ_3} (5.2)

satisfies modified jump conditions that simply omit the conjugating factors e^{±z(−iη)_∓^{−1/2}σ_3} from the jump matrices. Hence the jump matrices for F^{(n)}(η, y, z) are arcwise independent of both η and z. One can then derive a linear first-order differential equation for F^{(n)}(η, y, z) with respect to z (see Section 5.2 below). However, the derivation of a linear first-order differential equation for F^{(n)}(η, y, z) with respect to η is more challenging. One can deduce that F_η^{(n)}(η, y, z) F^{(n)}(η, y, z)^{−1} is rational in η with simple poles at η = 0 and η = is_j(y), j = 1, 2, 3, but it turns out that there is not enough information available to deduce the residue matrices fully. Without the first-order system with respect to η, one cannot obtain the desired nonlinear differential equation from any compatibility condition.
About a decade ago, we approached Alexander Its with a similar conundrum in the setting of a project to study elliptic function approximations of rational solutions of the second Painlevé equation [2]. His advice was to eschew the undetermined Fuchsian linear system with respect to the Riemann-Hilbert complex variable (spectral parameter) in favor of a remarkable algebraic identity satisfied by the matrix solutions of Riemann-Hilbert problems whose jump matrices have a certain structure. Expanding this identity with respect to the spectral parameter produces numerous identities among functions of the independent variable alone that serve to close the system of differential equations; squaring it produces a scalar identity that links the spectral curve and the target differential equation.
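The squaring mechanism can be illustrated in isolation. In the sketch below (our illustration; G here is a schematic stand-in, not the paper's G^{(n)}), conjugating the involution σ_3 by an arbitrary invertible matrix F and multiplying by a scalar function R produces a matrix whose square is the scalar R²I, no matter how complicated F is:

```python
import sympy as sp

# Generic invertible 2x2 matrix F (entries are free symbols) and sigma3.
a, b, c, d, R = sp.symbols('a b c d R')
F = sp.Matrix([[a, b], [c, d]])
sigma3 = sp.Matrix([[1, 0], [0, -1]])

# Schematic analogue of the identity: G = R * F * sigma3 * F^{-1}.
G = R * F * sigma3 * F.inv()

# Since sigma3**2 = I, squaring removes the conjugation entirely:
#   G**2 = R**2 * F * sigma3**2 * F^{-1} = R**2 * I.
G2 = sp.simplify(G * G)
print(G2)  # R**2 times the identity matrix
```

Expanding such an identity in the spectral parameter is what produces the scalar relations used below.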
The jump matrices of Riemann-Hilbert Problem 4.1 have the necessary structure for this method to apply. In the rest of this section, we implement the method and show how it yields the expected differential equation (1.6). Specifically, we prove the following theorem, in which the integration constant E is determined from y via (5.3), and c = c_1(y) is the smooth but non-analytic function of y defined in Lemma 2.1.
Remark 5.2. This result shows that the first-order autonomous differential equation (1.6), which is now well-defined given y as in the theorem statement, is solved by the approximation Ȗ_n(z; y), which also depends on the index n ∈ Z_{>0}. However, the space of solutions of the differential equation is mapped out by translations in z, and the particular translate needed to identify Ȗ_n(z; y) will generally depend on n and is not specified by Theorem 5.1.
Remark 5.3. Theorem 5.1 shows that the (scaled) algebraic function n^{−1/2} u_n(n^{3/2}(y + n^{−1}z)), which has a finite number of poles, is well approximated in its pole region as n → ∞ by a solution of the Weierstraß equation in the form (1.6) having an infinite number of poles. Interestingly, the same Weierstraß equation has recently been shown to govern the large-x asymptotic behavior of general (non-algebraic) solutions of the Painlevé-III (D7) equation (1.1) by Shimomura [14].
Now we continue with the proof of Theorem 5.1. An elementary example illustrating the basic steps of the method we use can be found in Appendix A.

Expansion of N^{(n),out}(η, y, z) near η = ∞
It is easy to see from the jump condition (4.4) that if y, z, and n are such that N^{(n),out}(η, y, z) exists, then the product F(η) = F^{(n)}(η, y, z) defined by (5.2) is analytic for large η and decays to I as η → ∞. Therefore, there are matrix coefficients F_j = F_j^{(n)}(y, z) such that: This immediately implies that N^{(n),out}(η, y, z) has an expansion in nonnegative integer powers of (−iη)^{−1/2} that is convergent for |η| large enough. By matching the coefficients of like powers of −iη in the expansions (5.4)–(5.5), one can express the coefficients C_{m/2} = C_{m/2}^{(n)}(y, z) in terms of the F_j. Therefore, we find: and so on. Eliminating F_1 in favor of C_1, we can also write: (5.7)

Remark 5.4. Similar analysis of the growth and jump conditions of Riemann-Hilbert Problem 4.1 near η = 0 shows that the product N^{(n),out}(η, y, z) e^{ng(η,y)σ_3} e^{−inησ_3} E (−iη)^{−(−1)^n σ_3/4} is analytic at η = 0. This implies that the series in (4.5) is convergent, and that "∼" can be written instead as "=".
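The coefficient matching between (5.4) and (5.5) can be emulated symbolically. In the sketch below (our illustration; the precise normalizations of (5.4)–(5.7) in the source may differ), w plays the role of (−iη)^{−1/2}: F is expanded in integer powers of w², the exponential factor supplies the half-integer powers, and collecting powers of w reproduces relations of the shape C_{1/2} = −zσ_3 and C_1 = F_1 + (z²/2)I.

```python
import sympy as sp

w, z = sp.symbols('w z')
I2 = sp.eye(2)
sigma3 = sp.Matrix([[1, 0], [0, -1]])

# Generic matrix coefficient F1 of the expansion F = I + F1*w**2 + O(w**4),
# where w stands for (-i*eta)**(-1/2), so w**2 stands for (-i*eta)**(-1).
f11, f12, f21, f22 = sp.symbols('f11 f12 f21 f22')
F1 = sp.Matrix([[f11, f12], [f21, f22]])
F = I2 + F1 * w**2

# Truncated exponential e^{-z*w*sigma3}; note sigma3**2 = I.
E = I2 - z*w*sigma3 + (z**2 * w**2 / 2)*I2 - (z**3 * w**3 / 6)*sigma3

# N = F*E then has an expansion in half-integer powers of (-i*eta).
N = sp.expand(F * E)
C_half = N.applyfunc(lambda e: e.coeff(w, 1))   # coefficient of w
C_one = N.applyfunc(lambda e: e.coeff(w, 2))    # coefficient of w**2

print(C_half)  # equals -z*sigma3
print(C_one)   # equals F1 + (z**2/2)*I
```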
5.2 Differential equations in z

5.2.1 Lax equation satisfied by F^{(n)}(η, y, z)

Assuming it exists, the matrix function η ↦ F^{(n)}(η, y, z) defined in (5.2) above satisfies modified jump conditions with jump matrices that are independent of z (and also of η, but we will not use that). Then by standard arguments, F_z(η, y, z) F^{(n)}(η, y, z)^{−1} is analytic in η except possibly at η = 0, ∞. In terms of the outer parametrix, we have from (5.2): The expansion (5.5) is differentiable term-by-term with respect to z, and therefore: where we used C_{1/2} = −zσ_3. Also, directly from (5.5): Therefore, F_z(η, y, z) F^{(n)}(η, y, z)^{−1} = O((−iη)^{−1}) as η → ∞. Likewise, using (4.5), we get: as η → 0, and in the same limit: But, using the identity: the central factor becomes: where

U := [[0, e^{−5πi/3}], [0, 0]] and L := [[0, 0], [e^{5πi/3}, 0]].

Therefore: (5.9) It will also be convenient later to define the following related matrix: (5.10) These definitions imply that: and that: Note that, using (4.6) and det(A_0) = 1, whether n is even or odd one obtains the same formula for Ȗ_n in terms of T: (5.13) Comparing the expansions as η → ∞ and as η → 0, and multiplying on the right by F^{(n)}(η, y, z), we obtain the differential equation: (5.14)

Implied differential equations for coefficients
Combining (5.2) and (5.14), one obtains a differential equation for N^{(n),out}(η, y, z), namely (5.15). Multiplying (5.15) on the right by e^{ng(η,y)σ_3} e^{−inησ_3} E(−iη)^{−(−1)^n σ_3/4} and using (4.5) gives the convergent (by Remark 5.4) series: Multiplying the definition (5.17) of G^{(n)}(η, y, z) on the right by N^{(n),out}(η, y, z), we insert the expansions (5.4) and (5.18) and equate the coefficients of like powers of (−iη)^{−1} to obtain a hierarchy of equations: and so on. The first, third, and fifth equations give, in order: According to (5.6) and (5.7), the second and fourth equations are trivial identities. Likewise, G^{(n)}(η, y, z) has a Laurent expansion about η = 0 of the form: Multiplying the definition of G^{(n)}(η, y, z) on the right by N^{(n),out}(η, y, z) E(−iη)^{−(−1)^n σ_3/4} and using the expansions (4.5) and (5.18) again gives a hierarchy of equations. To see them, first we expand the left-hand side: Then we expand the right-hand side: Recalling the definitions (5.9) and (5.10), one sees that:

5.4 Scalar identity and completion of the proof of Theorem 5.1

The most remarkable identity stemming from the definition of G^{(n)}(η, y, z) comes from σ_3² = I, which implies that G^{(n)}(η, y, z)² is the scalar multiple of the identity (independent of both n and z, and rational in η) R(η, y)² I. We can write G^{(n)}(η, y, z) in the form: Therefore, using T² = 0: Using (5.19) and the fact that [C_1, σ_3] is off-diagonal, this becomes: Using tr(T) = 0, we verify that the coefficient of (−iη)^{−2} is also a multiple of the identity, and therefore: Then using this with η = iy/(4T_{11}): where we used det(T) = tr(T) = 0. But now, using the (1,1)-entry of the identity (5.19) shows that: In other words, recalling from (5.13) that T_{11} = Ȗ = Ȗ_n(z; y), we have shown that: (5.21) Comparing (1.6) and (5.21) shows that Ȗ(z) = Ȗ_n(z; y) satisfies the expected differential equation, equivalent to the Weierstraß equation (1.7) with constant of integration E connected to y via (5.3), which completes the proof of Theorem 5.1.
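The matrix identities invoked at the end of the proof are instances of the Cayley–Hamilton theorem for 2×2 matrices: any T with tr(T) = 0 and det(T) = 0 automatically satisfies T² = 0. A quick symbolic check (our illustration):

```python
import sympy as sp

# Generic traceless 2x2 matrix: T = [[a, b], [c, -a]], so tr(T) = 0.
a, b, c = sp.symbols('a b c')
T = sp.Matrix([[a, b], [c, -a]])

# Cayley-Hamilton for 2x2 matrices: T**2 - tr(T)*T + det(T)*I = 0,
# so with tr(T) = 0 this reduces to T**2 = -det(T)*I.
assert sp.expand(T**2 - T.trace()*T + T.det()*sp.eye(2)) == sp.zeros(2, 2)

# Imposing det(T) = -a**2 - b*c = 0 then forces T**2 = 0.
T2 = sp.expand(T**2).subs(a**2, -b*c)
print(T2)  # the 2x2 zero matrix
```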
Remark 5.5. It is an interesting coincidence that the cubic polynomial in Ȗ appearing in the differential equation (5.21) is related to the rational function R(η, y)² = f(η, y, c_1(y)) defining the underlying spectral curve (see (2.1)) by the substitution η = iy/(4Ȗ). Similar correspondences have been noted with each application of this method; see [2] for the original application to Painlevé-II, [1] for an application to Painlevé-III (D6), and [3] for an application to Painlevé-IV.

A Elementary illustration of the method
In this appendix we illustrate the method of proof of Theorem 5.1 with a toy Riemann-Hilbert problem.

Normalization: N(η, z) → I as η → ∞.
Endpoint behavior: η ↦ N(η, z) is allowed to blow up like a negative one-fourth power near each of the endpoints η = ±1.
It is easy to check that this problem has a unique solution for every z ∈ C, given explicitly by: where the diagonal matrix power is defined as the principal branch, R(η)² = η² − 1, and R(η): It is then straightforward to obtain from (A.1) that: In particular, this implies that the diagonal elements of N^{(1)}(z) satisfy simple differential equations:

Proof. Suppose that the roots η = is_j, j = 1, 2, 3 are distinct, and that the Boutroux conditions (2.2) hold (this will be justified later via a continuation argument). With an integration constant selected so that Re(h(η, y, c)) = 0 at any one of the roots, the zero level set K of Re(h(η, y, c)) is well defined, and it contains the closure K′ of the union of critical trajectories emanating from the points η = is_j, j = 1, 2, 3, which are the curves along which f(η, y, c) dη² < 0, where f is defined by (2.1). We claim that K′ has the following properties.
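Solutions of such two-endpoint problems are typically built from quarter powers of (η − 1)/(η + 1). The following numerical sketch (our illustration; the exact solution formula is not reproduced in this excerpt) checks the boundary-value relation behind the one-fourth-power behavior, namely that the principal-branch quarter power β(η) = ((η − 1)/(η + 1))^{1/4} satisfies β_+(x) = i β_−(x) for −1 < x < 1:

```python
import numpy as np

def beta(eta):
    """Principal-branch quarter power ((eta - 1)/(eta + 1))**(1/4)."""
    return ((eta - 1) / (eta + 1)) ** 0.25

# Approach a point of the cut (-1, 1) from above (+) and below (-).
x, eps = 0.3, 1e-9
b_plus = beta(x + 1j * eps)
b_minus = beta(x - 1j * eps)

# The boundary values differ by a factor of i across the cut:
print(abs(b_plus - 1j * b_minus))  # ~ 0 (up to the finite eps offset)
```

Such a multiplicative jump, conjugated into matrix form, is exactly the kind of piecewise-constant jump condition that the explicit solution must reproduce.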
K′ is a connected set consisting of six simple arcs, pairwise disjoint except for their endpoints:
- One arc joining the origin η = 0 to one of the three points, which we label as η = is_1. We take this arc to be the branch cut Σ_{0,1}.
- One arc joining the other two points, η = is_2 and η = is_3. We take this arc to be the branch cut Σ_{2,3}.
- Two arcs joining η = is_1 either to the same point, which we label as η = is_2 (case (i)), or one each to η = is_2 and η = is_3 (case (ii)).
- Two unbounded arcs tending to η = ∞ parallel to the real line, one in the right half-plane and one in the left half-plane.
In case (i), the region bounded by the two arcs joining η = is_1 with η = is_2 contains the origin η = 0, and both unbounded arcs emanate from η = is_3. In case (ii), the region bounded by the arc joining η = is_1 with η = is_2, the arc joining η = is_1 with η = is_3, and the arc Σ_{2,3} contains the origin η = 0, and one unbounded arc emanates from each of η = is_2 and η = is_3.
Locally, near each of the simple roots η = is_j, j = 1, 2, 3, of η ↦ f(η, y, c), the set K′ consists of a union of three trajectories emanating from η = is_j in directions separated by equal angles of 2π/3. Given an index j = 1, 2, 3, each of the three trajectories emanating from η = is_j either terminates in the other direction at η = is_1, η = is_2, η = is_3, or η = 0, or is unbounded, in which case it tends to η = ∞ asymptotically horizontally. This is because otherwise the trajectory would be divergent and hence recurrent [15, Theorem 11.1]. But the closure of a recurrent trajectory contains a nonempty domain in C, and since Re(h(η, y, c)) = 0 on the trajectory, this harmonic function would vanish identically on R (the Riemann surface of h_η(η, y, c), i.e., the spectral curve), which is a contradiction because h_η(η, y, c) is not identically zero. Taking into account the Boutroux conditions (2.2), which imply that K′ ⊂ K, similar local analysis shows that there can be at most one critical trajectory terminating at η = 0 and at most one unbounded critical trajectory tending horizontally to η = ∞ in each of the left and right half-planes.
To work out the global trajectory structure and prove the claim, it is easiest to first assume that $y > 0$ and $c \in \mathbb{R}$, in which case it is easy to see that $h_\eta(-\eta^*, y, c) = h_\eta(\eta, y, c)^*$, provided that $\Sigma_{0,1}$ and $\Sigma_{2,3}$ are taken to be symmetric in the imaginary $\eta$-axis, which we will also assume. Moreover, we either have (making a choice of labeling of the roots of $P$) $s_1 < 0 < s_2 < s_3$, or $s_1 < 0$ and $s_3 = s_2^*$. In either configuration the condition $I_{2,3} = 0$ holds automatically, and $c \in \mathbb{R}$ is presumed to be determined from the remaining real condition $I_{1,2} = 0$. We examine the two configurations in turn.
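The use of this Schwarz symmetry can be sketched in one line; here the vanishing of the real part of the integration constant is an assumption, consistent with the normalization of $h$ adopted below:

```latex
\[
\frac{d}{d\eta}\Bigl[h(-\eta^*, y, c)^*\Bigr]
  = -h_\eta(-\eta^*, y, c)^*
  = -h_\eta(\eta, y, c),
\]
so $h(-\eta^*, y, c)^* = -h(\eta, y, c) + C$ for some constant $C$. If the
integration constant is chosen so that $\mathrm{Re}(C) = 0$, then
$\mathrm{Re}(h(-\eta^*, y, c)) = -\mathrm{Re}(h(\eta, y, c))$, and in particular
the zero level set $K$ of $\mathrm{Re}(h)$, and with it $K' \subset K$, is
invariant under the reflection $\eta \mapsto -\eta^*$ through the imaginary axis.
```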
If $s_1 < 0 < s_2 < s_3$, then since $f(\eta, y, c) = h_\eta(\eta, y, c)^2 > 0$ holds for $\eta \in i\mathbb{R}$ between $\eta = is_1$ and $\eta = 0$ as well as between $\eta = is_2$ and $\eta = is_3$, these intervals of the imaginary axis are critical trajectories. Since elsewhere on the imaginary axis we have $f(\eta, y, c) = h_\eta(\eta, y, c)^2 < 0$, $\mathrm{Re}(h(\eta, y, c))$ is strictly monotone as $\eta$ varies in these intervals of $i\mathbb{R}$. Therefore, in this configuration there are no points of either $K$ or $K' \subset K$ on the imaginary axis outside the two critical trajectories. Since $K'$ is symmetric under reflection through the imaginary axis, and since exactly one critical trajectory goes to $\eta = \infty$ in each half-plane, there are only three possibilities:
- The two remaining trajectories emanating from $\eta = is_1$ tend to infinity in opposite half-planes, and there is a symmetric pair of arcs, one in each half-plane, joining the points $\eta = is_2$ and $\eta = is_3$. However, since $\mathrm{Re}(h(\eta, y, c))$ is harmonic between the imaginary axis and each of these arcs and vanishes on each critical trajectory, the maximum principle would imply that $\mathrm{Re}(h(\eta, y, c)) \equiv 0$ in each of these domains. This is a contradiction since $h_\eta(\eta, y, c)$ does not vanish identically.
- The two remaining trajectories emanating from $\eta = is_2$ tend to infinity in opposite half-planes, and there is a symmetric pair of arcs, one in each half-plane, joining the points $\eta = is_1$ and $\eta = is_3$. However, this would imply a crossing of two different trajectories at a point in each half-plane where $f(\eta, y, c)$ is finite and nonzero, which cannot occur.
- Therefore, the remaining possibility must hold: the two remaining trajectories emanating from $\eta = is_3$ tend to infinity in opposite half-planes, and there is a symmetric pair of arcs, one in each half-plane, joining the points $\eta = is_1$ and $\eta = is_2$.

This shows that the claimed structure holds in case (i) when $s_1 < 0 < s_2 < s_3$.
If instead $s_1 < 0$ and $s_3 = s_2^*$, then since $f(\eta, y, c) = h_\eta(\eta, y, c)^2 > 0$ holds for $\eta \in i\mathbb{R}$ between $\eta = is_1$ and $\eta = 0$, this interval of the imaginary axis is a critical trajectory, while in the intervals between $\eta = -i\infty$ and $\eta = is_1$ and between $\eta = 0$ and $\eta = +i\infty$ we have $f(\eta, y, c) = h_\eta(\eta, y, c)^2 < 0$, so $\mathrm{Re}(h(\eta, y, c))$ is strictly monotone. This implies that there are no points of either $K$ or $K' \subset K$ on the imaginary axis below $\eta = is_1$; but because $\mathrm{Re}(h(\eta, y, c))$ necessarily changes sign on the positive imaginary axis, due to the singularity at the origin and the linear growth at infinity, there is exactly one point of $K$ there, which may belong to $K'$. In fact, this point does indeed belong to $K'$: otherwise, at least two of the critical trajectories emanating from each of the points $\eta = is_2$ and $\eta = is_3 = -(is_2)^*$ would have to tend to infinity in the half-plane containing the point, because only one of them can terminate at $\eta = is_1$; this contradicts the fact that exactly one critical trajectory tends to infinity in each half-plane. So the distinguished point in the imaginary interval between $\eta = 0$ and $\eta = +i\infty$ belongs to $K'$ and lies on a critical trajectory crossing the imaginary axis horizontally and connecting $\eta = is_2$ and $\eta = is_3 = -(is_2)^*$. The remaining two trajectories emanating from each of these points necessarily tend to $\eta = is_1$ and $\eta = \infty$ without crossing. This shows that the claimed structure of $K'$ holds in case (ii) when $s_1 < 0$ and $s_3 = s_2^*$.

We next show that whenever $0 < y < y_c \approx 0.29177$, where $y_c$ is the critical value defined in [4, Section 4.6], there exists a unique $c = c_1(y) \in \mathbb{R}$ for which the conditions (2.2) (really just $I_{1,2} = 0$, as $I_{2,3} = 0$ is automatic) hold with root configuration $s_1 < 0 < s_2 < s_3$, and hence $K'$ has the claimed structure in case (i). To do this, we first suppose that $y > 0$ and choose $c \in \mathbb{R}$ differently, so that the cubic $\mu \mapsto P(\mu, y, c)$ defined in (2.1) has a simple root $\mu = s$ and a double root $\mu = d$. Then by
setting $P(\mu, y, c) = -(\mu - d)^2(\mu - s)$ one sees that $d = (1 - s)/2$ and that $s$ satisfies the cubic equation $s(s-1)^2 = -y^2$, while $c = -d^2 - 2ds = \tfrac{1}{4}(3s^2 - 2s - 1)$. The condition $y > 0$ implies that the equation $s(s-1)^2 = -y^2$ has one real solution and two complex-conjugate solutions for $s$. But if $s = u + iv$ with $v \neq 0$, then $\mathrm{Im}(c) = \tfrac{1}{2}(3u - 1)v$, which vanishes for $v \neq 0$ only if $u = \tfrac{1}{3}$. Then $\mathrm{Im}(y^2) = -\mathrm{Im}(s(s-1)^2) = v^3 \neq 0$, which contradicts $y > 0$. Therefore, the conditions $y > 0$ and $c \in \mathbb{R}$ require that we select the real root $s = s(y) < 0$ of $s(s-1)^2 = -y^2$; then $d = d(y) > \tfrac{1}{2}$ is also real and $c = c_0(y)$ is a corresponding well-defined real number. In this double-root configuration, the function $\eta \mapsto h_\eta(\eta, y, c_0(y))$ is analytic except on the imaginary segment between $\eta = is(y)$ and $\eta = 0$. Choosing an integration constant so that $\mathrm{Re}(h(\eta, y, c_0(y))) = 0$ for $\eta = is(y)$, the function $\mathrm{Re}(h(\eta, y, c_0(y)))$ is well defined by contour integration and is harmonic except on the branch cut of $h_\eta(\eta, y, c_0(y))$. According to [4,
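The algebra behind these formulas is a direct coefficient match; the following sketch expands the double-root ansatz and reads off the stated relations:

```latex
\[
-(\mu - d)^2(\mu - s)
  = -\mu^3 + (2d + s)\mu^2 - (d^2 + 2ds)\mu + d^2 s.
\]
The stated relation $d = (1 - s)/2$ is equivalent to $2d + s = 1$, and then
\[
c = -(d^2 + 2ds)
  = -\frac{(1-s)^2}{4} - s(1 - s)
  = \frac{1}{4}\bigl(3s^2 - 2s - 1\bigr),
\]
while the constant term becomes
$d^2 s = \tfrac{1}{4}\, s (s-1)^2 = -\tfrac{1}{4}\, y^2$
once the constraint $s(s-1)^2 = -y^2$ is imposed.
```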

1. Algebraic solutions of the Painlevé-III ($D_7$) equation

The Painlevé-III ($D_6$) equation for a function $u : \mathbb{C} \to \mathbb{C}$, $x \mapsto u(x)$, is

Figure 1. Left panel: a density plot of $|U_{10}(Y^3)|$ and the boundary of the "bow-tie" region $B$. Right panel: a similar plot of $|U_{10}(y)|$ on the principal sheet of the $y$-plane with $-\pi < \arg(y) < \pi$ and the sheet boundary (branch cut) shown with a red line. In both plots, lighter/darker color indicates larger/smaller modulus.
Let $\Sigma_0^\pm$, $\Sigma_\infty^\pm$, and $C_\pm$ be smooth, pairwise disjoint, oriented open arcs in the $\eta$-plane as shown in Figure 2. The important properties of these arcs are the following.
- $\Sigma_0^+$ terminates at the origin, and $\Sigma_0^-$ originates from the origin tangent to the line making the angle $2\arg(y)$ with the vertical.
- $\Sigma_\infty^+$ terminates vertically at $\eta = i\infty$, and $\Sigma_\infty^-$ originates vertically from $\eta = -i\infty$.
- $\Sigma_\infty^+$, $\Sigma_0^+$, $C_+$, and $C_-$ share a common initial point.
- $\Sigma_\infty^-$, $\Sigma_0^-$, $C_+$, and $C_-$ share a common terminal point.
Riemann-Hilbert Problem 1.1 (scaled algebraic Painlevé-III ($D_7$) solutions, [4, Section 4.1]). Let $y \in \mathbb{C}$ with $|\arg(y)| \leq \pi$ and $n \in \mathbb{Z}_{>0}$ be given. Seek a $2 \times 2$ matrix function $\eta \mapsto Z^{(n)}(\eta, y)$ with the following properties: $Z^{(n)}(\eta, y)$ takes continuous boundary values on the jump contour except at $\eta = 0$, and these boundary values are related by the jump conditions, with fractional powers of $-iny$ defined by continuation of the principal branch for $y > 0$. Then a rescaling of the algebraic solution of Painlevé-III ($D_7$) is encoded in the solution of Riemann-Hilbert Problem 1.1 by the formula $U_n(y) := n^{-1/2} u_n(n^{3/2} y)$.

Figure 3. Jump contours and sign chart of $\mathrm{Re}(h)$ in the $\eta$-plane for $0 < y < y_c$. Left panel: the zero level set $K$ of $\mathrm{Re}(h(\eta, y, c))$ shown in gray and orange (orange indicates the branch cuts $\Sigma_{0,1}$ and $\Sigma_{2,3}$ of $h_\eta(\eta, y, c)$), and the relative placement of the jump contour for Riemann-Hilbert Problem 1.1. The sign of $\mathrm{Re}(h)$ is as indicated, and $\mathrm{Re}(h)$ only changes sign across the gray arcs of $K$. Note that the contour $\Sigma_\infty^+$ actually extends from $\eta = is_2$, taken as the junction point of $C_-$, $C_+$, and $\Sigma_0^+$, all the way up the positive imaginary axis, passing through $\Sigma_{2,3}$. Likewise, $\Sigma_\infty^-$ extends from $-i\infty$ up to $is_1$. The arc $\Sigma_0^-$ coincides with the branch cut $\Sigma_{0,1}$. Center panel: the jump contour for $N^{(n)}(\eta, y, z)$ has two additional arcs on the left and right of the branch cut $\Sigma_{2,3}$ after opening a lens. Right panel: the jump contour for $N^{(n),\mathrm{out}}(\eta, y, z)$ consists of the arcs $\Sigma_\infty^-$, $\Sigma_{0,1}$, $\Sigma_0^+$, and $\Sigma_{2,3}$ (shown with solid curves; the dashed arcs in the jump contour for $N^{(n)}(\eta, y, z)$ have been neglected).

Theorem 5.1. Fix $n \in \mathbb{Z}_{>0}$ and $y$ with $\mathrm{Re}(y) > 0$ and $Y = y^{1/3}$ in the interior of $B$. Then the function $z \mapsto \breve{U}_n(z; y)$ defined from the solution of Riemann-Hilbert Problem 4.1 by (4.6) is a solution of the differential equation (1.6), in which the integration constant $E$ is given by