Dataset fields (with observed length/value ranges):
- title: sequence (lengths 0 to 18)
- author: sequence (lengths 0 to 4.41k)
- authoraffiliation: sequence (lengths 0 to 6.45k)
- venue: sequence (lengths 0 to 9)
- abstract: string (lengths 1 to 37.6k)
- doi: string (lengths 10 to 114)
- pdfurls: sequence (lengths 1 to 3)
- corpusid: int64 (values 158 to 259M)
- arxivid: string (lengths 9 to 16)
- pdfsha: string (lengths 40 to 40)
- text: string (lengths 66 to 715k)
- github_urls: sequence (lengths 0 to 36)
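If the corpus is distributed in a standard dataset format, records with this schema can be iterated as sketched below; the dataset path used here is a hypothetical placeholder, not the collection's real identifier.

```python
# Sketch of iterating records with the schema above (assumes the corpus is published
# as a Hugging Face dataset; "user/papers-corpus" is a hypothetical placeholder path).
from datasets import load_dataset

ds = load_dataset("user/papers-corpus", split="train")  # hypothetical identifier

record = ds[0]
print(record["title"])          # list of title strings
print(record["arxivid"])        # arXiv identifier string
print(record["pdfurls"])        # list of PDF URLs
print(len(record["text"]))      # full-text length in characters

# Keep only records that link at least one GitHub repository.
with_code = ds.filter(lambda r: len(r["github_urls"]) > 0)
print(len(with_code))
```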
[ "ON QUASI-ISOMETRY AND CHOICE", "ON QUASI-ISOMETRY AND CHOICE" ]
[ "Samuel M Corson " ]
[]
[]
In this note we prove that the symmetry of the quasi-isometry relation implies the axiom of choice, even when the relation is restricted to geodesic hyperbolic spaces. We show that this result is sharp by demonstrating that symmetry of quasi-isometry in an even more restrictive setting does not imply the axiom of choice. The "Bottleneck Theorem" of Jason Fox Manning [Ma] also implies choice. 2010 Mathematics Subject Classification. 03E25.
null
[ "https://arxiv.org/pdf/1609.01353v1.pdf" ]
119,127,317
1609.01353
ab1807e618fc25d8a5431dafe48ca8de51740ae0
ON QUASI-ISOMETRY AND CHOICE. Samuel M Corson. 6 Sep 2016.
Introduction
A standard benchmark for the deductive strength of a theorem is that it implies the axiom of choice (see for example [K], [HL], [Hod], [B], [How]). By this we mean that by assuming the Zermelo-Fraenkel axioms of set theory (without the axiom of choice) and the theorem under consideration, the axiom of choice can be deduced (see [TZ] for a listing of these axioms). Let ZF denote the Zermelo-Fraenkel axioms without the axiom of choice. The axiom of choice cannot be proved from ZF (see [C]). Thus if a theorem implies choice then a proof of the theorem requires choice or some stronger assumption in set theory. We demonstrate the deductive strength of two theorems in metric space theory.
We start with some definitions. If (S, d_S) and (T, d_T) are metric spaces, a function f : S → T is a quasi-isometry if there exists N ∈ ω such that B(f(S), N) = T and for all x, y ∈ S,
(1/N) d_S(x, y) − N ≤ d_T(f(x), f(y)) ≤ N d_S(x, y) + N,
where B(J, p) is the closed neighborhood {x ∈ T : d_T(x, J) ≤ p}. This definition differs slightly from the standard one, which uses two or three parameters (e.g. Definition 8.14 in [BH]), but our definition is easily seen to be equivalent. The notion of quasi-isometry is used extensively in geometric group theory (see for example [G1], [G2]). In case there exists a quasi-isometry from S to T we say S is quasi-isometric to T. It is easy to see that the quasi-isometry relation is reflexive and transitive (in particular this can be proven without using the axiom of choice). It is a standard exercise to prove that the quasi-isometry relation is symmetric, and we shall see that the proof unavoidably utilizes the axiom of choice. That is, the symmetry of the quasi-isometry relation on metric spaces implies the axiom of choice (see Corollary 2).
It is natural to ask whether the symmetry of quasi-isometry in more restrictive settings implies choice. Recall that a metric space S is geodesic if for any two points x, y ∈ S there is an isometric embedding ρ : [0, d_S(x, y)] → S with ρ(0) = x and ρ(d_S(x, y)) = y (the image of which is called a geodesic segment). Geodesic segments need not be unique, but a choice of geodesic segment for points x, y ∈ S will be denoted [x, y]. A geodesic space S is δ-hyperbolic if for any x, y, z ∈ S we have [x, y] ⊆ B([x, z] ∪ [y, z], δ), and is hyperbolic if it is δ-hyperbolic for some δ. Although there is a definition of hyperbolicity in a non-geodesic setting (involving the Gromov product), all hyperbolic spaces in this paper will be geodesic. Obviously δ_1-hyperbolicity implies δ_0-hyperbolicity if δ_0 ≥ δ_1. If S is δ-hyperbolic then by scaling the metric by λ > 0 one sees that S is quasi-isometric (via the identity map) to a λδ-hyperbolic space. A 0-hyperbolic space is more commonly called an R-tree, and 0-hyperbolicity implies that geodesics are unique.
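For readability, the two displayed conditions above can be restated compactly; the following LaTeX block is a direct transcription of the definitions in this note, with no added assumptions.

```latex
% Quasi-isometry, single-parameter form used in this note: f : S -> T with N in omega,
% B(f(S), N) = T, and for all x, y in S
\[
  \tfrac{1}{N}\, d_S(x,y) - N \;\le\; d_T\bigl(f(x), f(y)\bigr) \;\le\; N\, d_S(x,y) + N .
\]
% delta-hyperbolicity of a geodesic space S (thin-triangles condition):
\[
  [x,y] \subseteq B\bigl([x,z] \cup [y,z],\, \delta\bigr) \qquad \text{for all } x, y, z \in S .
\]
```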
Let SQHS (symmetry of quasi-isometry on hyperbolic spaces) denote the assertion that if hyperbolic space S is quasi-isometric to hyperbolic space T, then T is quasi-isometric to S. We prove the following:
Theorem 1. SQHS implies the axiom of choice.
From Theorem 1 one immediately obtains the result mentioned earlier:
Corollary 2. The symmetry of quasi-isometry on metric spaces implies the axiom of choice.
One can ask whether Theorem 1 can be strengthened by further restricting the symmetry of quasi-isometry to a smaller class of metric spaces. The most natural choice would be the restriction to the class of R-trees. It turns out that this restriction is too narrow to imply the axiom of choice. In particular we have the following, which demonstrates the sharpness of Theorem 1:
Theorem 3. The symmetry of the quasi-isometry relation between R-trees follows from ZF alone.
In fact we prove the stronger claim that if f : T → S is a quasi-isometry with T an R-tree then there exists a quasi-isometry h : S → T. This highlights the contrast between 0-hyperbolic spaces and those which are simply δ-hyperbolic with δ > 0 (and therefore quasi-isometric to a δ′-hyperbolic space for any δ′ > 0 via scaling).
We move on to another result in quasi-isometry. In [Ma] Jason Fox Manning proved the following (Theorem 4.6), which we call the Bottleneck Theorem:
Theorem (J. F. Manning). Let Y be a geodesic metric space. The following are equivalent:
(1) There exists a simplicial tree Γ which is quasi-isometric to Y.
(2) There is some ∆ > 0 so that for all x, y ∈ Y there is a midpoint m = m(x, y) with d(x, m) = d(y, m) = (1/2) d(x, y) and the property that any path from x to y must pass within less than ∆ of the point m.
Part (1) was originally expressed "Y is quasi-isometric to some simplicial tree." However, what is exhibited in Manning's proof is a simplicial tree which is quasi-isometric to Y, so in light of Theorem 1 we express part (1) as we do. The final main result of this note is the following:
Theorem 4. The Bottleneck Theorem implies the axiom of choice.
The proofs of Theorems 1 and 4 are similar and will involve a construction given in Section 2. All of the theorems will then be proved in Section 3. We use the following formulation of the axiom of choice: if Z is a nonempty set consisting of pairwise disjoint nonempty sets then there exists a set A such that A ∩ X has cardinality one for all X ∈ Z.
The Graph Γ_0
The construction and proofs in this section will all be carried out in ZF, without using any other assumptions. Let Z be a nonempty collection of pairwise disjoint nonempty sets. We construct a graph Γ_0 = Γ_0(V, E) with labelled edges. For our set of vertices we take V(Γ_0) = ((⋃Z) × (ω \ {0})) ∪ {b}, where b ∉ ⋃Z. The graph Γ_0 will have no edge from a vertex to itself. Between two distinct vertices there will either be no edge or two edges (with one edge labeled by one vertex, and the other edge labeled by the other vertex). If there exists an edge between the points v_0, v_1 ∈ V we let E({v_0, v_1}, v_0) denote the edge between the two vertices which is labeled by v_0, and similarly for E({v_0, v_1}, v_1).
For distinct vertices v 0 and v 1 let E({v 0 , v 1 }, v 0 ) and E({v 0 , v 1 }, v 1 ) be in E(Γ 0 ) if one of the following holds: (1) v i = b and v 1−i = (y, 1) for some y ∈ Z (2) v i = (x i , n) and v 1−i = (x 1−i , m) for some X ∈ Z; x i , x 1−i ∈ X; and m, n ∈ ω \ {0} with |m − n| ≤ 1 In other words, for each X ∈ Z and n ≥ 1 the induced subgraph on (X × {n}) ∪ (X × {n + 1}) is complete with two edges connecting each pair of distinct vertices, and each vertex in ( Z)×{1} shares two edges with the vertex b. We now consider Γ 0 as a metric graph as follows. Where there is an edge E({v 0 , v 1 }, v 0 ), we attach a compact interval [− 1 2 , 1 2 ] with − 1 2 being identified with v 0 and 1 2 being identified with v 1 . Endow Γ 0 with the metric given by letting the distance d 0 between two points be the minimal length of the path needed to connect them by moving along the attached metric intervals. It is clear that Γ 0 is path connected and the path metric defined above makes Γ 0 a geodesic metric space (with geodesics not being unique in general). For each point y ∈ Γ 0 (not necessarily a vertex) we define the level Lev(y) by Lev(y) = ⌊d 0 (y, b)⌋. Here ⌊·⌋ denotes the floor function. In particular Lev((x, n)) = n for any (x, n) ∈ V (Γ 0 ). Define the base of Γ 0 to be the set of all points of level 0 (i.e. those points of distance < 1 from b). For each X ∈ Z define the X-arm to be the set of points of level ≥ 1 which are distance at most 1 2 away from a vertex (x, n) ∈ V (Γ 0 ) with x ∈ X ∈ Z and n ≥ 1. Thus each point in Γ 0 is either in the base or is in an X-arm for exactly one X ∈ Z (uniqueness of the arm follows from the pairwise disjointness of the elements of Z). We prove some lemmas which will aid us in proving the main proposition of this section. Lemma 5. The base, as well as each level of each arm, is of diameter ≤ 2. Proof. Notice that each element of the base is distance < 1 away from the point b, whence the bound on the diameter of the base follows. Each point in Γ 0 is at most distance 1 2 away from some vertex. If elements y 0 , y 1 are in the X-arm and are of level n, then in particular each y i is distance ≤ 1 2 away from a vertex v i with second coordinate n or n + 1. Then there is a path from y 0 to v 0 of length ≤ 1 2 , a path from v 0 to v 1 of length 0 or 1, and a path from v 1 to y 1 of length ≤ 1 2 . Lemma 6. If ρ : [0, r] → Γ 0 is a geodesic let ρ −1 (V (Γ 0 )) = {a 0 , . . . , a k } with a i+1 = a i + 1. Then |a 0 − 0|, |r − a k | < 1. If Lev(ρ(a i )) is constant then k ≤ 1. If k ≥ 2 then the sequence Lev(ρ(a i )) is either increasing, or decreasing, or decreasing to 0 and then increasing (by 1 unit in each case). Proof. That |a 0 −0|, |r−a k | < 1 holds is clear. Certainly | Lev(ρ(a i ))−Lev(ρ(a i+1 ))| ≤ 1. If Lev(ρ(a i )) = Lev(ρ(a i+1 )) then ρ may not pass through any other vertices since in particular there would be a j with j = i, i+1 and | Lev(ρ(a j ))−Lev(ρ(a i ))| ≤ 1, so that d 0 (ρ(a j ), ρ(a i )), d 0 (ρ(a j ), ρ(a i+1 )) ≤ 1. Thus in this case i = 0 and i + 1 = k = 1. Notice that ρ(a i ) and ρ(a j ) cannot both be in the same level on the same arm, or both be in the base, if k ≥ 2 and i = j (by a similar argument). Supposing that k ≥ 2 we therefore know that either the sequence Lev(ρ(a i )) increases by 1 at each index; or decreases by 1 at each index; or decreases to 0 by 1 at each index, after which ρ moves up another arm and Lev(ρ(a i )) increases by 1 at each index. Lemma 7. Suppose w ∈ [x, y] ⊆ Γ 0 . 
Then any path from x to y must pass within distance 2 of w. Proof. The claim is trivial if either d(x, w) ≤ 2 or d(y, w) ≤ 2. Otherwise, let ρ : [0, d(x, y)] → Γ 0 be the geodesic associated with the segment [x, y] with ρ(0) = x. Let a 0 , a 1 , . . . , a k be the sequence described in Lemma 6 and let 0 ≤ j ≤ k be such that w ∈ [ρ(a j ), ρ(a j+1 )] \ {ρ(a j+1 )} ⊆ [x, y]. Since d 0 (w, x), d 0 (w, y) > 2 we know that 1 ≤ j ≤ k − 2. By Lemma 6 it is either the case that Lev(ρ(a j )) is increasing, or decreasing, or decreasing to 0 and then increasing, by increments of 1. In case Lev(ρ(a i )) is increasing, we know that x is either in the base or in the same arm as w and y, with Lev(y) > Lev(w) > Lev(x), and removing B(w, 2) from the arm removes all vertices from that arm of level Lev(ρ(a j )) (since all such vertices are distance 1 from each other and d 0 (ρ(a j ), w) < 1). There is no combinatorial path in Γ 0 from ρ(a 0 ) to ρ(a k ) which does not pass through a vertex of level Lev(ρ(a j )), so no path from x to y avoids B(w, 2). Similar proofs for Lev(ρ(a i )) decreasing and Lev(ρ(a i )) decreasing to 0 and then increasing prove the claim. Lemma 8. The geodesic space Γ 0 is 2-hyperbolic. Proof. Let x, y, z ∈ Γ 0 be given. Lemma 9. Given x, y ∈ Γ 0 there exists a point m with d 0 (m, x) = d 0 (m, y) = 1 2 d 0 (x, y) such that any path from x to y must pass within less than ∆ = 3 of the point m. Proof. Let [x, y] be any geodesic, let m ∈ [x, y] be the point such that d 0 (m, x) = d 0 (m, y) = 1 2 d 0 (x, y). Applying Lemma 7 with m = w we know that any path from x to y must pass within distance 2 of m, so any path from x to y must pass within distance less than 3 of m. Lemma 10. If x, y ∈ Γ 0 and L > 0 then there exists z ∈ V (Γ 0 ) such that d 0 (x, z) > L, d 0 (y, z) > L, Lev(z) > L, and any path from x to z comes within distance 4 of y. Proof. The claim is straightforward to prove if d 0 (x, y) ≤ 4, so assume d 0 (x, y) > 4. Then x and y cannot both be in the base or both be on the same level of the same arm by Lemma 5. Let L 0 ∈ ω with L 0 > L + Lev(y) + Lev(x) + 2. If x, y lie on the same arm with Lev(x) > Lev(y), or if y is in the base, then let z ∈ V (Γ 0 ) be on a different arm from x with Lev(z) ≥ L 0 . Then certainly d 0 (x, z) > d(y, z) ≥ Lev(z) − 1 > L. Letting v ∈ V (Γ 0 ) be the unique point on the same arm as y (or v = b in case y is in the base) satisfying Lev(v) = Lev(y) and v ∈ [x, z], we have by Lemma 7 that any path from x to z must pass within distance 2 of v, and since d 0 (v, y) ≤ 2 we know any path from x to z must pass within distance 4 of y. Else, select z ∈ V (Γ 0 ) with Lev(z) ≥ L 0 and with z on the same arm as y. Then Lev(z) > L and d 0 (x, z) ≥ d 0 (b, z) − d 0 (x, b) ≥ Lev(z) − (Lev(x) + 2) > L, and d 0 (y, z) > L follows as well. As before, y is within distance 2 of any geodesic segment [x, z] and we argue similarly to obtain the same conclusion. The proofs of Theorems 1 and 4 hinge on the following property of Γ 0 : Proposition 11. If g : Γ → Γ 0 is a quasi-isometry from a simplicial tree Γ then there exists a set A such that A ∩ X has cardinality 1 for all X ∈ Z. We spend the remainder of this section proving Proposition 11. Let g : Γ → Γ 0 be a quasi-isometry, with Γ a simplicial tree with simplicial metric d. Let N ≥ 4 be an associated quasi-isometry constant for g. We describe a pruning process for the tree Γ. Call a vertex w ∈ V (Γ) terminal if w is of valence one. 
Let Γ (1) = Γ ′ be the tree Γ with all terminal vertices and adjoining edges removed, and in general let Γ (n+1) = (Γ (n) ) ′ . It is clear that for all n the graph Γ (n) is also a simplicial tree. Also, for n ≥ 2 and w ∈ V (Γ (n) ) \ V (Γ (n−1) ) there exists some w ′ ∈ V (Γ (n−1) ) \ V (Γ (n−2) ) which is adjacent to w. One can show by induction on n that given adjacent vertices w, w ′ such that w ∈ V (Γ (n) ) \ V (Γ (n−1) ) and w ′ ∈ V (Γ (n−1) ) \ V (Γ (n−2) ), there is no geodesic segment γ starting at w of length greater than n with w ′ ∈ γ. We claim that the pruning process on Γ stabilizes. More concretely, letting K = 7N 2 we have the following: Lemma 12. Γ (K) = Γ (K+1) Proof. Suppose for contradiction that w ∈ V (Γ (K) ) \ V (Γ (K+1) ). Let w 0 = w and pick w 1 ∈ V (Γ (K−1) ) \ V (Γ (K) ) which is adjacent to w. Continue in this manner so that w n ∈ V (Γ (K−n) ) \ V (Γ (K−n+1) ) is adjacent to w n−1 for n ≤ K. Then w K ∈ V (Γ) \ V (Γ (1) ) is distance K away from w. Suppose that w ′ ∈ Γ is distance at least 2K from w K and suppose for contradiction that w / ∈ [w K , w ′ ]. Let n ∈ ω be least such that w n ∈ [w K , w ′ ], and by assumption n ≥ 1. Notice that d(w n , w ′ ) ≥ 2K − (K − n) = K + n. But the geodesic segment [w, w ′ ] = [w, w n ] ∪ [w n , w ′ ] gives a geodesic beginning at w, passing through w 1 which is of length ≥ n + (K + n) > K, a contradiction. Thus d(w ′ , w K ) ≥ 2K implies w ∈ [w K , w ′ ]. Notice that d 0 (g(w), g(w K )) ≥ 1 N d(w, w K )−N = K N −N = 6N . Letting x = g(w) and y = g(w K ) we pick a z ∈ Γ 0 as in Lemma 10 with L = (2K + 1)N + N 2 . Select w ′ ∈ Γ such that d 0 (g(w ′ ), z) ≤ N . Now d(w ′ , w K ) ≥ 1 N d 0 (g(w ′ ), g(w K )) − 1 > 1 N (d 0 (z, g(w K )) − N ) − N ≥ 1 N (L − N ) − N ≥ (2K + N ) − N = 2K Letting w = v 0 , v 1 , . . . , v p be the vertices in Γ on the geodesic from w to w ′ , listed in increasing distance from w, we know that d( v i , w K ) ≥ K for all i. Then d 0 (g(v i ), g(w K )) ≥ 6N . Also, d 0 (g(v i ), g(v i+1 )) ≤ 2N for all i and d 0 (g(v p ), g(w ′ )) ≤ 2N . Notice that [g(v 0 ), g(v 1 )] ∪ [g(v 1 ), g(v 2 )] ∪ · · · ∪ [g(v p−1 ), g(v p )] ∪ [g(v p ), g(w ′ )] ∪ [g(w ′ ), z] gives a path from g(w) to z which must come within distance 4 of g(w K ). But this requires for some i the inequality d 0 (g(v i ), g(w K )) ≤ 4 + 4N , a contradiction. It is clear that the subtree Γ (K) satisfies B(Γ (K) , K) = Γ, and so the inclusion map Γ (K) → Γ is a quasi-isometry, and composing g with inclusion gives a quasiisometry from Γ (K) to Γ 0 (that the composition of quasi-isometries is a quasiisometry is easily provable in ZF). Thus we may assume without loss of generality that Γ has only vertices of valence 2 or greater. Let K = 7N 2 as before. Fix once and for all a vertex v ∈ V (Γ) such that d 0 (g(v), b) ≤ 3N . Such a selection of v is possible by choosing w ∈ Γ with d 0 (g(w), b) ≤ N and a vertex v ∈ V (Γ) with d(v, w) ≤ 1 2 . Lemma 13. If v ′ ∈ V (Γ) with d(v, v ′ ) ≥ K then v ′ has valence exactly 2. Proof. Suppose for contradiction that v ′ has valence at least 3 and let K 0 = d(v ′ , v) ≥ K. Select w 1,0 , w 2,0 , . . . , w K,0 = w 0 ∈ V (Γ) such that d(w 0 , v) = d(w 0 , v ′ ) + d(v ′ , v) and v ′ , w 1,0 , w 2,0 , . . . , w K,0 = w 0 are the vertices through which the geodesic from v ′ to w 0 passes, listed in order. Thus d(v ′ , w 0 ) = K. Let K 1 = N 2 (K + K 0 + 9). As v ′ is of valence at least 3 we also have vertices w 1,1 , . . . 
, w_{2,1}, . . . , w_{K_1,1} = w_1 where d(v, w_1) = d(v, v′) + d(v′, w_1); v′, w_{1,1}, w_{2,1}, . . . , w_{K_1,1} are the vertices through which the geodesic from v′ to w_1 moves; and w_{1,1} ≠ w_{1,0}. These selections are possible because v′ is of valence ≥ 3 and since all w_{i,j} are of valence at least 2. Making such a finite number of choices, where the number of choices is already known, is still within the purview of ZF. Let v = v_0, v_1, v_2, . . . , v_{K_0} = v′ be the vertices of the geodesic segment [v, v′] listed in the order in which they are traversed on the geodesic from v to v′. Notice that
d_0(g(v′), b) ≥ d_0(g(v′), g(v)) − d_0(g(v), b) ≥ ((1/N) d(v′, v) − N) − 3N ≥ K/N − N − 3N = 3N,
and by the same token we have d_0(g(w_{i,j}), b) ≥ 3N for all w_{i,j} which we have defined. Then all g(w_{i,j}) and also g(v′) are not in the base of Γ_0, and we claim in fact that they must be in the same arm, say the X-arm. We prove this for g(w_0) and g(w_1), and the proof in all other cases is completely analogous. If g(w_0) and g(w_1) are in different arms then any path from g(w_0) to g(w_1) must pass through b. Then the path given by
[g(w_0) = g(w_{K,0}), g(w_{K−1,0})] ∪ · · · ∪ [g(w_{2,0}), g(w_{1,0})] ∪ [g(w_{1,0}), g(v′)] ∪ [g(v′), g(w_{1,1})] ∪ · · · ∪ [g(w_{K_1−1,1}), g(w_{K_1,1}) = g(w_1)]
must pass through b, but each listed segment of the path is of length ≤ 2N and so this would bring either g(v′) or some g(w_{i,j}) within distance N of b, which is a contradiction. We further note that
d_0(g(w_0), b) ≥ d_0(g(w_0), g(v)) − d_0(g(v), b) ≥ (2K/N − N) − 3N = 10N
and
d_0(g(w_0), b) ≤ d_0(g(w_0), g(v)) + d_0(g(v), b) ≤ (N d(w_0, v) + N) + 3N = N(K + K_0) + 4N
and
d_0(g(w_1), b) ≥ d_0(g(w_1), g(v)) − d_0(g(v), b) ≥ ((1/N) d(w_1, v) − N) − 3N > ((1/N) d(w_1, v′) − N) − 3N ≥ N²(K + K_0 + 9)/N − 4N ≥ N(K + K_0) + 5N.
The point g(w_0) must come within distance 2 of any geodesic [g(v), g(w_1)] since g(w_0) and g(w_1) lie on the X-arm with 10N ≤ Lev(g(w_0)) ≤ Lev(g(w_1)) − N and Lev(g(v)) ≤ 3N (the point g(v) needn't lie on the X-arm). Then by Lemma 7 we know that the path given by
[g(v) = g(v_0), g(v_1)] ∪ [g(v_1), g(v_2)] ∪ · · · ∪ [g(v_{K_0−1}), g(v_{K_0}) = g(v′)] ∪ [g(v′), g(w_{1,1})] ∪ · · · ∪ [g(w_{K_1−1,1}), g(w_{K_1,1}) = g(w_1)]
must pass within distance 4 of g(w_0). But then some g(v_i) or g(w_{j,1}) must be within N + 4 of g(w_0), contradicting d_0(g(w_0), g(w_{j,1})), d_0(g(w_0), g(v_i)) ≥ K/N − N = 6N for all 0 ≤ i ≤ K_0 and 1 ≤ j ≤ K_1.
Let W be the set of all vertices of distance exactly 2K from v.
Lemma 14. For each w ∈ W the point g(w) is in an arm of Γ_0. Also, for each X ∈ Z there is a unique w ∈ W with g(w) in the X-arm.
Proof. If w ∈ W then
d_0(g(w), b) ≥ d_0(g(w), g(v)) − d_0(g(v), b) ≥ (2K/N − N) − 3N = 10N,
so that Lev(g(w)) ≥ 10N > 0 and g(w) is in an arm. Suppose that for distinct w_0, w_1 ∈ W we have g(w_0) and g(w_1) in the X-arm. As w_0 and w_1 are distinct we know that d(w_0, w_1) ≥ 2K since, letting v′ ∈ V(Γ) satisfy [v, w_0] ∩ [v, w_1] = [v, v′], we have d(v, v′) < K by Lemma 13. Then d_0(g(w_0), g(w_1)) ≥ 2K/N − N ≥ 10N. Let without loss of generality Lev(g(w_1)) ≥ Lev(g(w_0)) + 10N − 4. Let v = v_0, v_1, . . . , v_{2K} = w_1 be the vertices in [v, w_1] listed in increasing distance from v.
As in the proof of Lemma 13, the path given by [g(v_0), g(v_1)] ∪ · · · ∪ [g(v_{2K−1}), g(v_{2K})] must pass within distance 4 of g(w_0), so some g(v_i) must be within N + 4 of g(w_0), contradicting d_0(g(v_i), g(w_0)) ≥ K/N − N = 6N.
Let X ∈ Z be given and select x ∈ Γ such that g(x) is in the X-arm and Lev(g(x)) ≥ 2KN + 2N² + 3N. Then
d(x, v) ≥ (1/N) d_0(g(x), g(v)) − 1 > (1/N)(d_0(g(x), b) − d_0(g(v), b)) − N ≥ (1/N)(d_0(g(x), b) − 3N) − N ≥ (2KN + 2N²)/N − N = 2K + N.
Let w ∈ [v, x] be the vertex such that d(w, v) = 2K; since all vertices in the geodesic segment [w, x] are at least distance 2K from v, we may argue as before that g(w) is also in the X-arm.
To finish the proof of Proposition 11 we let h : W → ⋃Z be defined by
h(w) = π_1(g(w)) if g(w) ∈ V(Γ_0), and h(w) = π_1(π_2(E)) if g(w) ∉ V(Γ_0) lies on the edge E,
where π_1 and π_2 are the projections to the first and second coordinates, respectively. Thus h(w) gives the first coordinate of g(w) provided g(w) ∈ V(Γ_0), and otherwise h gives the first coordinate of the label of the edge on which g(w) lies. Let A = h(W). The check that A satisfies the conclusion of Proposition 11 is straightforward.
The Proofs of the Main Results
In this section we restate and prove Theorems 1, 4, and then 3.
Theorem 1. SQHS implies the axiom of choice.
Proof. Assume SQHS, let Z be a nonempty collection of disjoint nonempty sets, and let Γ_0 be as constructed in Section 2. We give a simplicial tree Γ_1 as follows: let V(Γ_1) = (Z × (ω \ {0})) ∪ {B} where B ∉ Z. Let (v_0, v_1) ∈ E(Γ_1) if one of the following holds:
(1) v_i = B and v_{1−i} = (Y, 1) for some Y ∈ Z
(2) v_i = (Y, n) and v_{1−i} = (Y, m) for some Y ∈ Z and m = n ± 1
Let f : Γ_0 → Γ_1 be given in the following way. Map vertices to vertices by letting f(b) = B and f((x, n)) = (X, n) where x ∈ X. Map all points on an edge between (x_0, n) and (x_1, n) to (X, n) (where x_0, x_1 ∈ X) and map points on an edge between vertices of differing levels so that the restriction of f to this edge is an isometry. Notice that f is onto and d_0(x, y) − 2 ≤ d_1(f(x), f(y)) ≤ d_0(x, y), where d_1 is the simplicial metric on Γ_1. Since Γ_1 is 0-hyperbolic and Γ_0 is 2-hyperbolic (Lemma 8), we have by SQHS a quasi-isometry g : Γ_1 → Γ_0, which implies a selection for the collection Z by Proposition 11.
Theorem 4. The Bottleneck Theorem implies the axiom of choice.
Proof. Assume the Bottleneck Theorem, let Z be a nonempty collection of nonempty pairwise disjoint sets and let Γ_0 be as constructed in Section 2. By Lemma 9 we know Γ_0 satisfies condition (2) of the Bottleneck Theorem, so there exists a simplicial tree Γ_2 and a quasi-isometry g : Γ_2 → Γ_0. By Proposition 11 there is a selection on Z and we have proven the axiom of choice.
Theorem 3. The symmetry of the quasi-isometry relation between R-trees follows from ZF.
Proof. We suppose that f : (T, d) → (T′, d′) is a quasi-isometry, with T an R-tree. Let N ∈ ω \ {0} be a corresponding quasi-isometry constant. We may assume that both T and T′ are nonempty. Fix z ∈ T. We define a map h : (T′, d′) → (T, d) by letting h(x) = y where [z, y] = ⋂_{y_0 ∈ f^{-1}(B(x,N))} [z, y_0]. We check (using only ZF) that the map h is well defined. We show that for every x ∈ T′ there is a unique y ∈ T such that [z, y] = ⋂_{y_0 ∈ f^{-1}(B(x,N))} [z, y_0], and the axiom of replacement implies that such a map h exists. Let x ∈ T′ be given. We know by assumption that B(x, N) intersects the image of f nontrivially, so f^{-1}(B(x, N)) is nonempty.
Select y′_0 ∈ f^{-1}(B(x, N)). Notice that if w ∈ ⋂_{y_0 ∈ f^{-1}(B(x,N))} [z, y_0] and w′ ∈ [z, w] then w′ ∈ ⋂_{y_0 ∈ f^{-1}(B(x,N))} [z, y_0] as well. Then ⋂_{y_0 ∈ f^{-1}(B(x,N))} [z, y_0] is a subset of [z, y′_0] which is closed under taking elements of geodesic segments from z. Moreover, ⋂_{y_0 ∈ f^{-1}(B(x,N))} [z, y_0] is compact as an intersection of closed subsets of the compact metric space [z, y′_0]. The set {t ∈ [0, d(z, y′_0)] : (∃w ∈ ⋂_{y_0 ∈ f^{-1}(B(x,N))} [z, y_0]) d(z, w) = t} is isometric with the compact space ⋂_{y_0 ∈ f^{-1}(B(x,N))} [z, y_0] and so contains a supremum s, which corresponds under the isometry to a point y ∈ ⋂_{y_0 ∈ f^{-1}(B(x,N))} [z, y_0]. Then we know [z, y] ⊆ ⋂_{y_0 ∈ f^{-1}(B(x,N))} [z, y_0], and that ⋂_{y_0 ∈ f^{-1}(B(x,N))} [z, y_0] ⊆ [z, y] ⊆ [z, y′_0] is clear. That y is unique follows from the fact that any y′ ∈ T is uniquely determined by the geodesic [z, y′].
We now show (using only ZF) that h is a quasi-isometry and we will be done. We claim that M = 9N² is a quasi-isometry constant for h. Note first that for a given x ∈ T′, y_0, y_1 ∈ f^{-1}(B(x, N)) implies that d′(f(y_0), f(y_1)) ≤ 2N, so that (1/N) d(y_0, y_1) − N ≤ d′(f(y_0), f(y_1)) ≤ 2N, from which we obtain d(y_0, y_1) ≤ 3N². As T is a tree it follows that h(x) is distance at most 3N² from any y_0 ∈ f^{-1}(B(x, N)), for if y_0, y_1 ∈ f^{-1}(B(x, N)), the equality [z, y_0] ∩ [z, y_1] = [z, w] implies d(w, y_0) ≤ 3N². Then d(w, h(f(w))) ≤ 3N² < 9N², so that B(h(T′), 9N²) = T. Let x_0, x_1 ∈ T′ and select y_0, y_1 ∈ T such that d′(f(y_0), x_0), d′(f(y_1), x_1) ≤ N. Then as noted we have d(y_0, h(x_0)), d(y_1, h(x_1)) ≤ 3N². Then
(1/M) d′(x_0, x_1) − M ≤ (1/N) d′(x_0, x_1) − 9N²
≤ (1/N)(d′(x_0, x_1) − N − N) − 7N²
≤ (1/N)(d′(x_0, x_1) − d′(x_0, f(y_0)) − d′(x_1, f(y_1))) − 7N²
≤ (1/N) d′(f(y_0), f(y_1)) − N − 6N²
≤ d(y_0, y_1) − 6N²
≤ d(y_0, y_1) − d(h(x_0), y_0) − d(h(x_1), y_1)
≤ d(h(x_0), h(x_1))
≤ d(h(x_0), y_0) + d(h(x_1), y_1) + d(y_0, y_1)
≤ 6N² + d(y_0, y_1)
≤ 6N² + N d′(f(y_0), f(y_1)) + N²
≤ 6N² + N (d′(x_0, x_1) + d′(x_0, f(y_0)) + d′(x_1, f(y_1))) + N²
≤ 9N² + N d′(x_0, x_1)
≤ M d′(x_0, x_1) + M,
which completes the proof.
References
[B] A. Blass. Existence of bases implies the axiom of choice. Axiomatic Set Theory, Contemporary Mathematics 31 (1984), 31-33.
[BH] M. R. Bridson, A. Haefliger. Metric Spaces of Non-positive Curvature, Springer-Verlag, 1999.
[C] P. J. Cohen. The Independence of the Continuum Hypothesis I. Proc. of the U. S. National Academy of Sciences 50 (1963), 1143-1148.
[G1] M. Gromov. Groups of polynomial growth and expanding maps. Inst. Hautes Études Sci. Publ. Math. 53 (1981), 53-73.
[G2] M. Gromov. Hyperbolic groups. Essays in group theory, 75-263, Math. Sci. Res. Inst. Publ. 8, Springer, New York, 1987.
[HL] J. D. Halpern, A. Levy. The ordering theorem does not imply the axiom of choice. Notices of the AMS 11 (1964), 56.
[Hod] W. Hodges. Krull implies Zorn. J. London Math. Soc. (2) 19 (1979), 285-287.
[How] P. E. Howard. Subgroups of a free group and the axiom of choice. J. of Symbolic Logic 50 (1985), 458-467.
[K] J. Kelley. The Tychonoff product theorem implies the axiom of choice. Fund. Mathematica 37 (1950), 75-76.
[Ma] J. F. Manning. Geometry of pseudocharacters. Geom. Topol. 9 (2005), 1147-1185.
[TZ] G. Takeuti, W. M. Zaring. Introduction to Axiomatic Set Theory, Springer-Verlag, 1971.
Samuel M. Corson, Mathematics Department, 1326 Stevenson Center, Vanderbilt University, Nashville, TN 37240, USA. [email protected]
[]
[ "Acrylic Target Vessels for a High-Precision Measurement of θ 13 with the Daya Bay Antineutrino Detectors", "Acrylic Target Vessels for a High-Precision Measurement of θ 13 with the Daya Bay Antineutrino Detectors" ]
[ "H R Band \nUniversity of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin\n", "R Brown \nBrookhaven National Laboratory\nUptonNYUSA\n", "J Cherwinka \nUniversity of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin\n", "J Cao \nIntitute of High Energy Physics\n19 Yuquan RoadBeijingChina\n", "Y Chang \nNational United University Taiwan\nNo 1, Lien-Da RdMiao-LiTaiwan\n", "B Edwards \nLawrence Berkeley National Laboratory\nAddress, Cyclotron RdBerkeleyCAUSA\n", "W S He \nNational Taiwan University\nNo 1, Sec 4, Roosevelt RdTaipeiTaiwan\n", "K M Heeger \nUniversity of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin\n", "Y Heng \nIntitute of High Energy Physics\n19 Yuquan RoadBeijingChina\n", "T H Ho \nNational Taiwan University\nNo 1, Sec 4, Roosevelt RdTaipeiTaiwan\n", "Y B Hsiung \nNational Taiwan University\nNo 1, Sec 4, Roosevelt RdTaipeiTaiwan\n", "L Greenler \nUniversity of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin\n", "S H Kettell \nBrookhaven National Laboratory\nUptonNYUSA\n", "C A Lewis \nUniversity of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin\n", "K B Luk \nLawrence Berkeley National Laboratory\nAddress, Cyclotron RdBerkeleyCAUSA\n", "X Li \nIntitute of High Energy Physics\n19 Yuquan RoadBeijingChina\n", "B R Littlejohn [email protected] \nUniversity of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin\n", "A Pagac \nUniversity of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin\n", "C H Wang \nNational Taiwan University\nNo 1, Sec 4, Roosevelt RdTaipeiTaiwan\n", "W Wang \nCollege of William and Mary\n116 Jamestown Rd. WilliamsburgVirginia\n", "Y Wang \nIntitute of High Energy Physics\n19 Yuquan RoadBeijingChina\n", "T Wise \nUniversity of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin\n", "Q Xiao \nUniversity of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin\n", "M Yeh \nBrookhaven National Laboratory\nUptonNYUSA\n", "H Zhuang \nIntitute of High Energy Physics\n19 Yuquan RoadBeijingChina\n" ]
[ "University of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin", "Brookhaven National Laboratory\nUptonNYUSA", "University of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin", "Intitute of High Energy Physics\n19 Yuquan RoadBeijingChina", "National United University Taiwan\nNo 1, Lien-Da RdMiao-LiTaiwan", "Lawrence Berkeley National Laboratory\nAddress, Cyclotron RdBerkeleyCAUSA", "National Taiwan University\nNo 1, Sec 4, Roosevelt RdTaipeiTaiwan", "University of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin", "Intitute of High Energy Physics\n19 Yuquan RoadBeijingChina", "National Taiwan University\nNo 1, Sec 4, Roosevelt RdTaipeiTaiwan", "National Taiwan University\nNo 1, Sec 4, Roosevelt RdTaipeiTaiwan", "University of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin", "Brookhaven National Laboratory\nUptonNYUSA", "University of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin", "Lawrence Berkeley National Laboratory\nAddress, Cyclotron RdBerkeleyCAUSA", "Intitute of High Energy Physics\n19 Yuquan RoadBeijingChina", "University of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin", "University of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin", "National Taiwan University\nNo 1, Sec 4, Roosevelt RdTaipeiTaiwan", "College of William and Mary\n116 Jamestown Rd. WilliamsburgVirginia", "Intitute of High Energy Physics\n19 Yuquan RoadBeijingChina", "University of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin", "University of Wisconsin\n1150 University AvenueMadison, MadisonWisconsin", "Brookhaven National Laboratory\nUptonNYUSA", "Intitute of High Energy Physics\n19 Yuquan RoadBeijingChina" ]
[]
This paper describes in detail the acrylic vessels used to encapsulate the target and gamma catcher regions in the Daya Bay experiment's first pair of antineutrino detectors. We give an overview of the design, fabrication, shipping, and installation of the acrylic vessels and their liquid overflow tanks. The acrylic quality assurance program and vessel characterization, which measures all geometric, optical, and material properties relevant to ν e detection at Daya Bay are summarized. This paper is the technical reference for the Daya Bay acrylic vessels and can provide guidance in the design and use of acrylic components in future neutrino or dark matter experiments.
10.1088/1748-0221/7/06/p06004
[ "https://arxiv.org/pdf/1202.2000v2.pdf" ]
119,303,023
1202.2000
e456bc0d577bab9549a860f2dbd0ece85e0c71cd
Acrylic Target Vessels for a High-Precision Measurement of θ13 with the Daya Bay Antineutrino Detectors. 10 Feb 2012. Keywords: Daya Bay; acrylic; quality assurance; optical properties; mechanical properties; compatibility; radioactivity.
Introduction
The Daya Bay reactor neutrino experiment is designed to measure the neutrino mixing parameter sin²2θ13 to better than 0.01 at 90% confidence level via the observation of reactor-νe disappearance [1]. The experiment consists of eight identical νe detectors (ADs) placed at three underground experimental halls at near and far baselines from six nuclear reactor cores, in the configuration shown in Figure 1. The Daya Bay experiment will be one of the first reactor neutrino experiments to measure relative antineutrino rates and spectra with near and far detectors at km-order baselines.
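As a point of orientation, the near/far disappearance measurement described above is usually phrased through the two-flavour survival probability; the expression below is standard background knowledge and is not quoted from this paper.

```latex
\[
  P_{\bar\nu_e \to \bar\nu_e} \;\approx\; 1 - \sin^2 2\theta_{13}\,
  \sin^2\!\left( \frac{1.267\,\Delta m^2_{31}\,[\mathrm{eV}^2]\; L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]} \right)
\]
% Comparing rates and spectra at near and far baselines L therefore constrains
% sin^2 2theta_13 largely independently of the absolute reactor flux.
```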
An antineutrino detector (AD), visible in Figure 2, consists of three concentric liquid zones separated by two ultraviolet-transmitting acrylic vessels. The innermost zone, composed of Gd-doped liquid scintillator (Gd-LS), is the antineutrino target. The middle region, filled with undoped liquid scintillator (LS), is a gamma catcher that absorbs energy from events on the edge of the target region, precluding the need for a fiducial volume cut. The outermost liquid region, filled with mineral oil (MO), is a buffer between the photomultiplier tubes (PMTs) around the outside edge of the detector and the two central regions. This region helps reduce radioactive backgrounds and improve detector energy resolution. νe interacting via inverse beta decay provide a clear signal by depositing a prompt energy pulse from positron energy deposition and annihilation in either of the two inner liquid scintillator volumes and a delayed 8 MeV energy pulse from neutron capture on Gd in the target volume.
The ADs are assembled in pairs to allow for a relative comparison between detectors that are as identical as possible. The first two ADs will be deployed and run in the Daya Bay near site experimental hall while the rest of the experiment's construction is completed. This near site data will be used to check the performance and response of the two 'identical' ADs and examine detector and reactor systematics. As the detector systematic uncertainty is the dominant systematic in the measurement of a near-far νe flux ratio, it must be kept well below the percent level if a percent-level relative disappearance is to be measured. Estimates of detector-related systematics for the Daya Bay experiment are shown in Table 1 (which lists, for each source of uncertainty, the absolute uncertainty for one AD and the baseline and goal relative uncertainties, in percent). The largest of all detector systematics are those related to the number of target protons and the energy cuts used to extract signal from background. An important task in achieving these stated systematics goals is understanding and characterizing the acrylic vessels that determine many of the properties of the detector target volume.
This paper focuses specifically on the Daya Bay acrylic vessels (AVs) and their related overflow tank systems, discussing design, fabrication, transport, installation, and ongoing characterization. The properties and history of the acrylic vessels must be particularly well-known because of the AVs' impact on the largest detector systematics: the acrylic's proximity to the target volume and its effect on light transmission affects detection efficiency, while the vessels' shapes determine the size and shape of the AD's target. In addition, reliable AVs are necessary to ensure the proper functionality of the experiment over the minimum designed experimental lifetime of 5 years. The AV overflow tank system, which holds excesses of the liquids inside the AVs, is crucial in monitoring the mass of the liquids inside the AD to <0.1%, and thus must also be thoroughly characterized.
Fabrication, construction, and characterization of acrylic vessels for the first two ADs took place between late 2008 and the middle of 2010. The timeline of the AV and overflow tank life cycles can be seen in Table 2 (fabrication and assembly timelines of the inner and outer acrylic vessels and overflow tank system for the first two ADs). Design and R&D began in 2006.
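The inverse-beta-decay signature described above can be summarized with the standard kinematic relation below; the numerical offset and the capture details are general reactor-neutrino background knowledge rather than values taken from this paper.

```latex
\[
  \bar\nu_e + p \;\to\; e^{+} + n ,
  \qquad E_{\text{prompt}} \;\approx\; E_{\bar\nu} - 0.78~\mathrm{MeV},
\]
% followed tens of microseconds later by neutron capture on Gd, which releases a gamma
% cascade totalling about 8 MeV and provides the delayed half of the coincidence.
```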
Vessel and Overflow Tank Design The Daya Bay acrylic target vessel system is composed of three main elements: the inner acrylic vessel (IAV), the outer acrylic vessel (OAV), and the overflow tank system. A complete acrylic vessel (AV) system is pictured in Figure 3. The IAV contains the Gd-LS volume, and is visible in Figure 3 as the inner nested cylinder. The OAV holds LS, and is visible in Figure 3 as the outer cylinder. The overflow tank system rests on top of the stainless steel vessel (SSV) that encapsulates the AVs, PMTs, reflectors, and AD liquids. This subsystem contains extra separated LS and Gd-LS volumes to ensure that both AVs are completely filled at all times during running despite environmental temperature and pressure changes. The AV structures must satisfy a wide range of design requirements. The main purpose of the the AVs is to maintain the target and gamma-catcher liquid regions as separate nested volumes of well-defined size. Thus, the design of the OAV must accommodate the nesting of the IAV within it. In addition, the vessels must be structurally sound while filled or unfilled and leak-tight. The vessels must always be completely full, with any excess liquid volume closely monitored in overflow tanks. There must also be a leak-tight liquid pathway between the vessels and the overflow tank system. To maintain design simplicity, these pathways should also accommodate the processes of filling and calibrating the liquid regions inside the vessels. The position of the two vessels with respect to one another should also be consistent to maintain these pathways and the liquid regions' shapes. The proper physics response of the AD places other design constraints on the vessels. They must be sufficiently optically clear such that anti-neutrino signals from the center regions can be detected by PMTs in the outer regions of the detectors. They must also have low radioactivity so as not to contribute significant background to the experiment. Further design requirements are imposed by the shipping and assembly processes. The vessels' design must include a method of being lifted during transportation and installation. They should also be designed to maintain their shape and integrity during and after experiencing stresses during these processes. The design should additionally allow for all parts to fit together properly in the presence of minor misalignments or dimensional flaws. Finally, the vessels must be designed to ensure that the AD will function properly for the entire planned length of the experiment. This means that the materials used for the acrylic vessels and overflow tanks must be compatible with LS, Gd-LS, and mineral oil over long time periods. The materials compatibility issues are also important for the stability of the Gd-LS. Extensive testing and special care has been taken to ensure that the wetted materials that come in contact with the detector liquids meet these requirements. Figure 3 provides an overview of the acrylic vessel design. The IAV and OAV are composed of a main cylindrical section approximately 3 m and 4 m in diameter, and 3 m and 4 m in height, respectively, and a conical top of 3% gradient. The walls are kept thin, 10 mm for the IAV and 18 mm for the OAV, to reduce the amount of non-scintillating absorptive material in the inner detector. The vessels encapsulate a volume of 23 tons of Gd-LS in the target and 25 tons of LS in the gamma-catcher region. 
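A rough cross-check of the two liquid regions can be made from the cylinder dimensions quoted above. The sketch below ignores the conical tops, ribs, and wall thicknesses, so it is only an order-of-magnitude geometric estimate, not the as-built liquid volumes.

```python
import math

# Rough geometric sketch of the two nested liquid regions described above, using the
# approximate 3 m / 4 m diameters and heights quoted in the text. The 3% conical tops,
# ribs and wall thicknesses are ignored.
IAV_DIAMETER, IAV_HEIGHT = 3.0, 3.0   # m
OAV_DIAMETER, OAV_HEIGHT = 4.0, 4.0   # m

v_target = math.pi * (IAV_DIAMETER / 2) ** 2 * IAV_HEIGHT        # Gd-LS region (inside IAV)
v_oav_interior = math.pi * (OAV_DIAMETER / 2) ** 2 * OAV_HEIGHT  # full OAV interior
v_gamma_catcher = v_oav_interior - v_target                      # LS region (between vessels)

print(f"Gd-LS target region : ~{v_target:.0f} m^3")
print(f"LS gamma catcher    : ~{v_gamma_catcher:.0f} m^3")
```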
The vessels are manufactured from ultraviolet-transmitting (UVT) acrylic, which allows for maximal transmission of near-UV and optical scintillation light emitted by the AD liquids. Acrylic is the vessel material of choice because it is stronger, cheaper, lower in radioactivity, and easier to fabricate than most other optical window materials. Fabrication and construction of large acrylic objects is a common practice: acrylic is the material of choice for commercial aquariums and other applications, and was also used to build the 12 m diameter sphere inside the Sudbury Neutrino Observatory [2]. Vessel Design The nested design of the inner and outer acrylic vessels in Daya Bay poses an interesting challenge. The OAV is manufactured in the US and shipped to Daya Bay, while the IAV is fabricated in Taiwan. The vessels are assembled together into the AD in the Surface Assembly Building (SAB) at Daya Bay. To accommodate this fabrication plan, the conical top of the OAV has been designed as a detachable lid that connects to the rest of the vessel via a double o-ring seal to a 4 mdiameter flange on top of the OAV walls. A cross-section of this OAV flange-lid connection can be seen in Figure 4, along with a photo of a production OAV flange seal and accompanying leak check port plug. The o-rings are compressed by stainless bolts torqued into nitrogen-allowed nuts at 60 foot-pounds. The bolts are separated from the acrylic by viton washers and teflon bolt-hole sleeves. to the rest of the OAV vessel. Also pictured in the photograph is the OAV leak-check port and the plug used to seal the leak-check port and the space between o-ring grooves. A teflon-covered stainless wire is used to hold the port plug in place. The top of the IAV and the OAV lid have two and three ports, respectively, which will serve as the entry point for liquids, and entry and exit points for calibration sources. A close-up view of the off-center calibration ports can be seen in Figure 5. The off-center ports accommodate delivery of LEDs and radioactive sources during calibration and liquids during filling, while the central port (a) Cutaway of the connection between the IAV calibration port and the IAV calibration box. (b) Cutaway of the connection between the OAV calibration port and the OAV calibration box. allows delivery of calibration sources and allows the flow of liquids between the AVs and the LS and Gd-LS overflow tanks. The IAV ports are connected to semi-flexible teflon bellows that run up to connection hardware on the stainless steel tank lid. While traversing the OAV region, the IAV teflon bellow runs inside a larger-diameter teflon bellow that connects the OAV ports to a related set of connection hardware on the stainless steel vessel lid. The calibration sources are housed in and lowered from slightly elevated automated calibration units directly above the stainless steel vessel connection hardware. All the connecting parts are made of acrylic, teflon, or viton, and utilize single or double o-ring seals to prevent liquid leakage. To allow for proper connection despite minor radial and height misalignments, the calibration tubes are capable of sliding up and down with respect to other parts while maintaining leak-tightness. The flexibility of the teflon bellows also helps assure connection in spite of slight misalignments. The bottoms of the AVs provide structural support, facilitate the alignment of the acrylic vessels, and make a mechanical link between the inner and outer vessels through the hold-down mechanisms. 
5 cm-thick supporting ribs can be seen in Figure 3 on the bottom outside of the IAV and the bottom inside of the OAV, as well as on the outside of the IAV and OAV lids. The ribs provide a support structure that allows the vessels to withstand stresses experienced during lifting, transport, and filling. Finite element analysis (FEA) was carried out on the vessel designs to determine the necessary wall thickness and structural support. The highest stress risk occurs during the filling process if liquid levels become uneven: Figure 6 shows that expected stresses for a 30 cm level difference inside and outside the OAV are as high as 10 MPa. To avoid these stresses, liquid levels will be matched to 5 cm during filling. The long-term design stress limits for the acrylic vessels is 5 MPa. Short-term stress exposures up to 10 MPa are considered safe. The filling process, which takes 4 days to complete, is considered a short duration compared to the nominal 5-10 year lifetime of the acrylic vessels. Stress characterization tests that validate these tolerances are discussed in Section 6.1.4. (a) Results of FEA calculation. The small region of maximum stress at 9.5 MPa is labelled "MX". Another highstress region can be seen at the bottom middle of the OAV. A few other design features located at the bottom of the vessels can be seen in Figure 7. Lifting hooks on the IAV and OAV can be seen in Figure 7(c); four hooks are located on the outside of each AV, one every 90 • . Rigging and lifting of the vessels using these hooks will be discussed further in Section 5. Figure 7 also demonstrates the hold-down mechanism which latches the IAV to the OAV. The IAV is lowered into the vessel and four of its ribs are rested on four teflon pucks that rest on the ribs of the OAV. A latching mechanism connected to the OAV is then rotated into place over a hook on the end of the IAV rib. This mechanism is designed to constrain the IAV's movement upward. The latch can only be disengaged by applying upward force to the bottom of the latch, minimizing the risk of accidental disengagement from vibration during transport. The rib stops atop the OAV ribs beyond the IAV rib ends constrain the IAV's movement in the radial direction. One set of rotational stops attached to either side of one OAV rib, along with the rib stops, constrain motion about the axis of the vessel. Table 3 lists all the materials present in the AVs, overflow tanks, and connection hardware, and overviews the level of compatibility between these materials and the AD liquids. Materials are non-compatible if one causes the mechanical or optical degradation of the other when the two are in contact. Further documentation of compatibility QA measurements are detailed in Section 6.1.3. The vessel and port connections only use materials compatible with all AD liquids. (a) A close-up of the lateral alignment guide, support puck, IAV hold-down mechanism, rib stop, and lifting hook without an IAV present. (b) A close-up of the as-built components pictured to the right. Not present in this picture is the support puck and IAV hold-down latch. (c) A close-up of an engaged IAV hold-down mechanism. IAV and OAV lifting hooks are also visible, along with the engaged lateral alignment guide. Figure 7. Close-up views of the IAV/OAV interface. The IAV rests on the teflon puck (blue), which is shimmed beforehand to ensure IAV levelness, The latch (brown) is rotated to secure the IAV with respect to the OAV. 
The lateral alignment guide directs azimuthal placement while the IAV is being lowered onto the shimmed puck.
Table 3. Material compatibility between AV and overflow system construction materials and AD liquids.
Material          LS   Gd-LS   MO
Acrylic           Y    Y       Y
Teflon            Y    Y       Y
Viton             Y    Y       Y
Nitrogen Gas      Y    Y       Y
Stainless Steel   N    N       Y
Overflow Tank Design
Overflow tanks are connected to the central detector volumes for Gd-LS, LS, and MO to allow for the thermal expansion of the detector liquids during filling, transport, and storage. A close-up view of the overflow tanks can be seen in Figure 8. The overflow tanks consist of two separated nested spaces bounded by acrylic. The innermost region, the Gd-LS overflow tank, is 1.3 m wide and 13 cm deep, and is surrounded by an LS overflow tank of 1.8 m diameter and 13 cm depth. This corresponds to an overflow liquid volume of 167 liters of Gd-LS and 151 liters of LS. These overflow spaces are created by three large acrylic structures, which can be seen in Figure 8. The first is a 0.9 m-radius flat-bottom cylindrical tank with no top that serves as the base of the LS overflow region. The next is a 0.65 m-radius cylinder with an extra elevated flared ring of material on its outside top that rests on top of the outer cylinder. Raised areas on the underside of this piece allow it to rest on the bottom of the outermost cylinder, while still allowing liquid to pass below it inside the outermost cylinder. This second piece is the base of the Gd-LS overflow region, and the flared ring covers the LS overflow region on top. The third piece is a lid for the Gd-LS overflow region. Polyurethane is applied to the interfaces between these three pieces to provide a seal between the overflow tank volumes and the outer cover gas region of the detectors. A stainless steel ring with spokes, or spider ring, is fastened down on top of these acrylic pieces to hold them together without the need of screws in acrylic. Finally, a stainless steel shell surrounding the apparatus provides an air- and liquid-tight seal to enclose the nitrogen gas and AD liquids in the overflow tanks and separate them from the water outside the detector.
The AD is filled with liquid by pumping Gd-LS and LS in through the off-center IAV and OAV ports, respectively, displacing nitrogen gas that is circulated through the AD before filling. The liquid fills up through the volume of the detector, then through the bellows and into the overflow tanks. Filling was stopped with the liquid levels at 4.6 cm and 6.4 cm above the bottom of the Gd-LS and LS overflow tanks, respectively. An approximate 3 °C rise in temperature and subsequent expansion of the AD liquids would cause the complete filling of the LS and Gd-LS overflow tanks. In addition, if the temperature is lowered by 2 °C, the liquid will contract out of the overflow tank and begin emptying the target volume, causing unknown changes in optical properties at the top of the detector. Because of this, a 2 °C temperature change is the operational limit that can be experienced by the detector. The locations of LS and Gd-LS liquid levels in the AVs and overflow tanks for a filled AD are illustrated in Figure 9. During detector operation, the liquid level is monitored in the overflow tanks by ultrasonic and capacitance sensors and in the off-center calibration ports with cameras to determine changes in the target and gamma-catcher mass resulting from changes in temperature and density of the liquids.
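The role of the overflow headroom can be illustrated with a simple expansion model. In the sketch below, the volumetric expansion coefficient and the total Gd-LS volume are assumed, illustrative numbers; only the tank diameter, depth, and post-filling level come from the text.

```python
import math

# Simple model of how bulk thermal expansion of the Gd-LS shows up as a level change in
# its overflow tank. BETA and V_GDLS are assumed, illustrative values (not from the paper);
# the tank diameter, depth, and post-filling level are the figures quoted above.
BETA = 9e-4           # 1/degC, assumed volumetric expansion coefficient of the scintillator
V_GDLS = 22.0         # m^3, assumed total Gd-LS volume connected to the tank
TANK_DIAMETER = 1.3   # m
TANK_DEPTH = 0.13     # m
FILL_LEVEL = 0.046    # m, Gd-LS level after filling

tank_area = math.pi * (TANK_DIAMETER / 2.0) ** 2  # m^2

def level_change_cm(delta_t_c: float) -> float:
    """Level rise (cm) in the overflow tank for a bulk temperature change delta_t_c (degC)."""
    delta_v = BETA * V_GDLS * delta_t_c           # m^3 of expansion
    return 100.0 * delta_v / tank_area

headroom_cm = 100.0 * (TANK_DEPTH - FILL_LEVEL)   # tank depth left above the fill level
for dt in (1.0, 2.0, 3.0):
    print(f"+{dt:.0f} degC: level rises ~{level_change_cm(dt):.1f} cm "
          f"(headroom after filling: {headroom_cm:.1f} cm)")
```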
Fabrication OAV Fabrication OAVs are manufactured at Reynolds Polymer Technology, Inc., in Grand Junction, Colorado, USA [3]. The walls of the OAVs are constructed from 16 sheets of Polycast UVT acrylic [4]. The bottom, lid, and flange are cut from two large, thick blocks of Reynolds-cast UVT acrylic. The bottom ribs are made from 50 mm-thick sheets of Reynolds UVT acrylic and a central hub piece machined from a block of Reynolds UVT acrylic. The vessel elements were bonded using a proprietary UVT bonding syrup from Reynolds; the bond material appears in approximately 1/8" wide regions between bonded sheets. A drawing showing the placement of acrylic sheets and bond lines on an OAV can be seen in Figure 10. OAV fabrication was preceded by the construction of a prototype OAV at the same facility, as well as a round of prototype acrylic optical testing to determine an acceptable acrylic for the production OAVs. Radioactivity, compatibility, stress, and optical testing results for a range of acrylics are presented Section 6.1. The cylindrical OAV walls are composed of two 4 m-diameter acrylic cylinders, each created by bending and bonding together eight acrylic sheets. The OAV top, bottom, and flange are machined out from two thick blocks of bonded UVT acrylic. Rib sections and a central hub are bonded to the bottom, dividing it into octants. Additional acrylic pieces for the IAV support structure and hold-down mechanisms are bonded to four of the ribs. Annealing is done on the individual lid, wall, and bottom components to cure bonding material and to clear residual stresses introduced during fabrication. Care was taken to fully support the shape of acrylic components during annealing to avoid permanent deformation from components becoming more plastic at high annealing temperatures and sagging under their own weight. Such sagging was experienced during annealing of the OAV lid prototype. Annealing of subsequent OAV lids was done on a conical frame, while annealing of the OAV bottoms was done on precision-cut flat surfaces to ensure flatness of the OAV base and flange. Bonding of OAV components is done using proprietary bonding syrup mixed by Reynods. After setting dams along the intended bond against the joining acrylic pieces, syrup is injected between the pieces and cured using a proprietary heating regimen. The OAV flange and bottom structure are bonded to the top and bottom cylinders, respectively. The top and bottom cylinders are then bonded together to complete the vessel construction. A final anneal is done after bonding to clear residual stresses introduced during bonding. Vessel surfaces and bond lines are sanded and polished after bonding to remove excess syrup and discontinuities and to improve the clarity of the vessel surface. The OAV and IAV are polished down to 1 to 3 micron grit using aluminum oxide polishing powder, buffing wheels and water. A fully fabricated OAV's top and bottom can be seen in Figure 11. Notice that all structural support is located on the IAV outside. Figure 11. A comparison of the tops and bottoms of production OAVs and IAVs. IAV Fabrication IAVs were constructed at Nakano International Limited, in Taipei, Taiwan [5]. Nakano used PoSiang UVT acrylic for all parts: six 10-mm thick sheets for the walls, four 15-mm thick sheets for the top lid, four 15-mm sheets for the bottom lid, two 55-mm sheets for the ribs, and two UVT acrylic calibration ports. Sheets are joined together by approximately 1/8" wide bonding regions made out of a cured UVT syrup. 
A drawing showing the placement of acrylic sheets and bond lines on an IAV can be seen in Figure 12. As with the OAVs, quality assurance measurements are done on candidate IAV materials to ensure the suitability of the production acrylic for the Daya Bay detector. Details of these measurements are discussed in Section 6. In addition, a prototype IAV was fabricated to test the construction processes and vessel design. Vessel construction begins by forming the six wall sheets and then bonding them together using the UVT syrup and a system of dams to keep the syrup in place while the bonds undergo UV-curing. During this process, the top and bottom of the vessels are constructed separately in the same manner, with the top being baked on a mold to achieve a conical shape. In addition, ribs are bonded onto the IAV bottom and top, and center and off-center calibration ports are bonded to the IAV top. Following construction, each separate component's surfaces are polished in a similar manner as described in the OAV construction. After polishing, the top is bonded to the walls. The inside of the vessel must then be cleaned before the final bond is made, as after this point no further access to the inside of the IAV exists. The cleaning of the IAV will be described in greater detail in Section 5.1. After cleaning, the bottom is bonded to the rest of the vessel, completing construction. The IAV bottom edge must be properly shaped to encourage excess bonding syrup to flow out of the vessel during curing rather than into the vessel where it cannot be removed or polished out. A picture of a fabricated IAV's top and bottom is visible in Figure 11. In Figure 11 one can identify contrasting features on the IAV and OAV top and bottom . On the top, the major differences are the presence of the extra OAV flange seal and an extra calibration port. On the bottoms, the major difference is that the OAV's support structure is inside the OAV, while the IAV's support structure is on the outside of the IAV. Overflow Tank Fabrication The three main acrylic components for the overflow tanks are the LS overflow tank, the Gd-LS overflow tank, and the tank lid. The acrylic pieces that comprise each of these three components are fabricated by subcontractors of Reynolds by cutting and forming sheets of UV-absorbing (UVA) acrylic into designed shapes. The component sub-pieces are then sent back to Reynolds to be bonded into the three separate overflow components using Weld-on 40 acrylic cement. Other smaller acrylic parts from the overflow tank system and connection hardware are also manufactured by Reynolds' subcontractors. Transport and Shipping After fabrication, the IAVs and OAVs are transported from their respective locations to the Surface Assembly Building (SAB) at Daya Bay, where they are assembled into the antineutrino detectors. The OAVs are manufactured at Reynolds Polymer Technology in Grand Junction, CO and are transported by truck to Long Beach, CA, by ship to Yantian Port, and finally by truck to Daya Bay. The IAVs must be trucked from the Nakano factory in Taipei to port, and then shipped to Yantian, where they are then brought by truck. The entire shipping route of the vessels can be seen in Figure 13. Shipping Crates For transport, the vessels are packed into shipping frames and containers that support the structure of the AVs and minimize their exposure to light and environmental conditions. The OAV shipping frame can be seen in Figure 14. 
The sealed OAV rests on a steel square base and is secured by fastening hold-down pieces over all four OAV lifting hooks. A steel A-frame structure rises on each side of the base. A steel OAV lid support structure is lowered onto the top middle of the four A-frames and is attached to ears located there. A plug in the central OAV port is screwed upward into the center of the lid support structure, providing extra structural support for the lid during shocks and preventing resonant vibrations in the lid during shipping. The OAV is protected from impacts by casing made of polystyrene and kiln-treated wood and plywood. Underneath this casing are layers of self-adhesive plastic film and Coroplast [6] to preserve the surface quality of the OAV. To ensure that the inside of the vessel remains somewhat isolated from the surrounding environment, a hose and HEPA filter is connected to both off-center ports. The IAV shipping container can be seen in Figure 15. The container consists of six steelframed panels lined on the inside with wood panels. These panels are bolted together to form the cubical shipping crate. Foot-thick styrofoam panels are placed inside these outer panels to support the IAV on all sides and to absorb shocks during shipping. Lid support is less crucial for the IAVs, as they are smaller in diameter that the OAVs. To protect its surfaces, the IAV is coated in plastic wrap and then covered with black plastic sheeting. The IAV ports are covered with flanges containing HEPA filters to ensure that the IAV insides retain their cleanliness. Shipment Monitoring Several devices are used to monitor OAV location, acceleration, temperature, humidity, pressure and light exposure during shipment. Customs issues in shipping prevented the application of the same monitoring regimen for the IAVs; as a result, no shipping data is available for IAV1 and IAV2. For the OAVs, acceleration data is collected every 9 seconds by MSR 145W dataloggers [7] mounted on the base of the shipping frame and the OAV lid. Additionally, the MSRs on the base recorded temperature and pressure readings every 90 seconds. The lid-mounted MSR had Figure 15. Photograph of a partially-unpacked IAV in its container. Some of the styrofoam packing has been removed, but the IAV remains in its black plastic casing with styrofoam in front and behind it. a pressure sensor on an external cable, which allowed measurement of the pressure inside the OAV. The largest pressure changes correspond to changes in altitude as the OAVs are trucked from ∼1400 m to sea level. Trackstick Pro GPS data loggers are also mounted on the shipping frame to provide location information while the OAVs are in the United States [8]. To measure shocks, such as would occur if the AV is dropped, Shock Timer 3d loggers are mounted along with the MSRs on the shipping frame and OAV lid [9]. These sensors recorded the date and time of all shocks over 3 g and took additional temperature and humidity readings every hour. Finally, a pair of HOBO Pendant Dataloggers are mounted from a threaded rod suspended inside each AV to monitor light levels [10]. These are expected to be low as the OAVs are shipped with opaque coverings to shield them from UV. The first pair of OAVs left CO on June 25, 2009 and arrived at Daya Bay on August 5 and August 6. Just outside of Grand Junction, metal lifting tabs attached to the top of the OAV lid support structure were bent down by impact with a low bridge; this impact was recorded by the shock timers. 
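The shock records described above lend themselves to a simple screening analysis: flag accelerations above the 3 g logging threshold and check whether events on the base and lid sensors coincide in time. The sketch below is a minimal illustration; the file names, column layout, and 1 s coincidence window are hypothetical, not the actual MSR or Shock Timer data format.

```python
import csv
from datetime import datetime, timedelta

SHOCK_THRESHOLD_G = 3.0                      # logging threshold used during shipment
COINCIDENCE_WINDOW = timedelta(seconds=1)    # assumed window for "correlated" shocks

def load_events(path):
    """Read (timestamp, acceleration in g) rows; the CSV format here is hypothetical."""
    events = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t = datetime.fromisoformat(row["timestamp"])
            a = float(row["accel_g"])
            if abs(a) >= SHOCK_THRESHOLD_G:
                events.append((t, a))
    return events

base_events = load_events("shock_base.csv")   # hypothetical file names
lid_events = load_events("shock_lid.csv")

correlated = [
    (tb, tl) for tb, _ in base_events for tl, _ in lid_events
    if abs(tb - tl) <= COINCIDENCE_WINDOW
]
print(f"{len(base_events)} base shocks, {len(lid_events)} lid shocks, "
      f"{len(correlated)} correlated within {COINCIDENCE_WINDOW.total_seconds():.0f} s")
```

Correlated events point to whole-crate shocks such as loading and unloading, while uncorrelated ones are more likely local sensor jolts, which is the pattern reported for the first shipment.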
The truck was returned to Reynolds to ensure that the vessels were not damaged. Other shocks greater than 3 g were recorded 10 times on the base sensors on each side of the OAVs, although most shocks did not correlate in time between sensors. The few sensor-correlated shocks seem to correspond to periods of loading or unloading. Periodic vertical acceleration measurements are shown in Figure 16; during the trip, typical accelerations were between 0.2 g and 0.4 g in excess of gravity. The changes in acceleration patterns give a picture of the status of the vessel, whether it was moving on the road, at rest, moving on a ship, or being loaded and unloaded.

Figure 16. Periodic acceleration measurements, measured in g. The measurement period begins in Grand Junction, Colorado and ends at port in Long Beach, California. The gray-shaded periods of low acceleration correspond to times when the OAVs and transport truck were at rest.

The temperature over time recorded by one sensor on the OAV shipping frame base is shown in Figure 17(a). Periods of rapid temperature fluctuation correspond to time spent outside on the road or in port, while the smoothly varying period corresponds to time spent on the ocean. The temperature sensor recorded variations between 8 °C and 48 °C, with a maximum 29 °C change over the course of 12 hours. Another temperature sensor located within the OAV packaging on its lid experienced temperature variations of only 20 °C. This 20 °C temperature fluctuation, if also experienced by the OAV, corresponds to a 4 mm variation in the radius of the OAV from thermal expansion and contraction. It was not clear from the available temperature data how much temperature fluctuation, and thus acrylic expansion and contraction, varied over the surface of the OAV. Excessive or uneven expansion and contraction of the OAV is a concern, as it is not known if such behavior will affect the mechanical strength of the OAV appreciably. Temperature tests have been conducted that ensure acrylic and bond optical and mechanical stability in the presence of these environmental fluctuations. As an added precaution, future OAVs were shipped with extra insulation to flatten out these temperature fluctuations. The recorded pressure history during the driving portion of the trip is shown in Figure 17(b); the rise in pressure at the end of the trip is the result of the transport truck coming down to sea level. Differences in pressure inside and outside an OAV were never greater than 1 mbar.

Figure 17. Temperature and pressure of the OAV environment during transport: (a) temperature measurements, split into five periods: by truck in the USA (red), at port in Long Beach (yellow), at sea (green), at port in Yantian (gray), and en route to and in the SAB (purple); (b) pressure measurements during OAV trucking transport in the US, with pressure changes due to altitude variations along the trucking route. The gray-shaded periods correspond to times when the OAVs were at rest.

The light sensor data did not show any unexpected periods of brightness. In addition, a light-sensitive acrylic monitor sample mounted by the OAV1 lid sensors did not show any signs of optical degradation, as would be expected if it had experienced any moderate exposure to sunlight.

Assembly

Cleanliness and Cleaning

When they reach Daya Bay, the acrylic vessels must be cleaned to ensure that the amount of radioactive contaminating material inside the detector's target region is minimized. In addition, cleaning improves the optical quality of the IAV and OAV surfaces and removes contaminants that are incompatible with the AD liquids.
The quantitative cleanliness goal is to ensure that the radioactivity rate from contaminants in the IAV and OAV is less than 10% of the expected radioactivity from the GdLS liquid. This corresponds to a contamination radioactivity of 1 Hz. The OAVs and IAVs were cleaned at the times listed in Table 2. Cleaning of the IAV insides was done over the course of two days at Nakano. Cleaning of the IAV outsides was done in the course of a day per IAV at the Daya Bay SAB. OAV cleaning took 4-5 days per vessel at the SAB. Cleaning of the OAVs is done in the SAB semi-cleanroom, a climate-controlled, low-particulate environment. The top of the vessel is removed, and the top flange and lid are then spot-scrubbed with Al 2 O 3 powder, water, and a microwipe cloth to remove salt water residue, rust, and plastic wrap residue. The entire lid is washed on top and bottom with a 1% Alconox solution and fresh microwipes and rinsed with a pressure washer using 10 MΩ water. The conductivity of the rinse water is measured with a conductivity meter. When the rinse water conductivity matches that of the water directly from the pressure washer, rinsing is considered complete. The lid is then dried with microwipe towels and covered with a clean tarp. This same process is applied to the inside and outside of the OAV. After cleaning the vessel inside and walls, the lid is then reconnected and the vessel is lifted onto blocks so that the bottom can be spot-scrubbed and cleaned. A few pictures of the cleaning process can be seen in Figure 18. (a) Scrubbing the OAV inside with 1% Alconox solution. (b) Testing rinse water conductivity with a conductivity meter. All conductivity meters used were accurate to 0.1 µS. Figure 18. Photographs taken during IAV and OAV cleaning in the SAB at Daya Bay. The same general procedure is applied during the cleaning of the outside of the IAV and the acrylic overflow tank parts in the SAB semi-cleanroom. The inside of the IAV is also cleaned in the same manner, but in a class 10,000 clean tent at the Nakano factory before the vessel is completely bonded together. AV tube connection hardware parts are cleaned using an ultrasonic cleaner first in a 1% Alconox solution, and then in water. Leak-Checking Vessels are leak-checked to ensure that the target volume remains well-known to better than 0.1% after five years of detector operation. In addition, the leaking of non-scintillating liquids like water or mineral oil into the scintillating regions will reduce the detector's light yield. The leak requirements for the inner volumes are listed in Table 5; if these requirements are met, the uncertainty in target mass resulting from leakage will be <0.013%. Leak rates of argon in cc/sec at a given pressure differential are calculated from the 5-year acceptable leakages to serve as specifications for the leak-checking process. While the entire AD is surrounded by water, no single direct water-to-LS junction exists. Because of this, a water-to-LS leak rate specification is not applied. Each AV pair has numerous seals in four main locations: on the OAV lid, on the IAV calibration/overflow tube stacks, on the OAV calibration/overflow tube stacks, and near the stainless steel lid connections. Some leak-testing procedures are done at Reynolds, and some are done on-site at the Daya Bay SAB using varied procedures that depend on the type and location of the seal. Double o-ring seals are used facilitate high-precision leak checking of the critical OAV flange connection. 
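The 5-year leakage budget quoted above translates into a very small average volumetric rate; the arithmetic is sketched below. The ~23.4 m³ target volume is taken from the volume estimate later in this paper, and the conversion of such a liquid budget into the argon leak-rate specifications of Table 5 involves pressure-differential and gas-flow factors that are not reproduced here.

```python
SECONDS_PER_YEAR = 3.156e7
TARGET_VOLUME_CC = 23.4e6             # ~23.4 m^3 Gd-LS target, in cm^3
MASS_UNCERTAINTY_FRACTION = 1.3e-4    # <0.013% target-mass uncertainty from leakage
YEARS = 5.0

allowed_leak_cc = TARGET_VOLUME_CC * MASS_UNCERTAINTY_FRACTION
avg_rate_cc_per_s = allowed_leak_cc / (YEARS * SECONDS_PER_YEAR)

print(f"Allowed 5-year liquid loss: ~{allowed_leak_cc:.0f} cc")
print(f"Average liquid leak rate:   ~{avg_rate_cc_per_s:.1e} cc/s")
```

Budgets of this kind are what the argon leak-rate specifications at a given pressure differential are derived from.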
This seal can be checked by pressurizing the region between the o-ring seals with argon using a special gas input port, and then monitoring any change in pressure over time. The Table 5. Leak rate specifications for the central liquid volumes, given for argon and for the AD liquids. As the leak rate is dependent on the pressure differential between zones, a conservative estimate of the zone pressure differential is also given. maximum leak rate on the main seal of OAV1 and OAV2 was 4.4×10 −2 cc/sec at a pressure of around 5 psi. This is well below 0.1 cc/sec, the maximum acceptable argon leak rate from MO to LS given in Table 5, corrected to a pressure of 5 psi. The OAV port single o-ring seals were tested at Reynolds by placing a tight seal at the bottom of the port, as in Figure 19, and then pressurizing the entire sealed port to 5 psi and monitoring any change in pressure with a pressure gauge. The maximum change in pressure for any OAV port was 3.2×10 −2 cc/sec, significantly below the 0.1 cc/sec leak-rate limit. For single and double o-ring seals that cannot be easily accessed or isolated, a different leakchecking procedure is used. The calibration tube stacks are assembled and attached to the AV, which is then pressurized to 10-15 cm water column with argon gas. Gas input routing and pressure control are centralized in a gas rack located in the SAB cleanroom. Gas output from the AD is run through an exhaust system to the SAB exterior. To prevent AV overpressuring from line kinks and other gas system malfunctions, safety relief valves are also placed at a number of locations in the gas system. A diagram of the gas system circuit can be seen in Figure 20(a). When the AVs are sufficiently pressurized, an argon sniffer can be used to check for leaks along the tube stack. The appearance of the IAV lid area during IAV tube stack leak checking can be seen in Figure 20(b). The sensitivity of this method is a minimum leak rate of 10 −3 cc/sec and is limited by the stability of the gas sniffer. The maximum measured leak rate measured using this method was 3×10 −3 cc/sec at one (a) Diagram of the gas circuit for AV leak checking. Gas is fed from bottles through a gas control rack and the into AV. Output air is pushed out of the AV and through an exhaust line to the exterior of the SAB. AV gas pressure is controlled through the use of pressure relief bubblers. (b) Photograph of the IAV tube stack leakchecking setup. An argon sniffer is run up and down the tube stack, which is pressurized with argon gas. location on on OAV1, well within the maximum specified Gd-LS-to-LS leak rate listed in Table 5. The leak rates in all other areas were below the sensitivity of the gas sniffer. The same method is used for measuring the leak rates in the OAV calibration tube stacks with similar sensitivity and results. A final test of the aggregate leak-tightness of the AVs was made after completing installation of the IAV and OAV tube stacks and SSV lid. Using the same gas circuit from the previous test, the Gd-LS and MO regions were pressurized with argon at 10 cm water pressure, while the LS region was filled with freon at 1 cm water pressure, to ensure minimal freon leakage during filling. Freon and argon concentrations were verified by measuring the lowered oxygen content of the exhaust air from the regions with an oxygen monitor. 
Once the LS region was composed of >75% freon, the pressure differentials were reversed, leaving the freon-filled LS volume pressurized with respect to the Gd-LS and MO volumes; this maximized any possible freon leakage. After 20 hours, the freon concentration of the AD1 Gd-LS and MO region exhausts were measured at 66 ppm and 52 ppm with a freon sensor; similar values were found for AD2. These values are below 159 ppm and 133 ppm, the maximum values of acceptable freon concentration in the Gd-LS and MO exhaust based on the AD leakage requirements. As was mentioned before, the acrylic overflow tank components are contained in a nitrogen gas environment. The leak-tight stainless steel housing that separates this gas volume from the water volume surrounding the AD and overflow region will not be discussed in this paper. Any possible leakage or splashing between overflow tank volumes is minimized by the acrylic covers on each volume. Installation After the OAV is cleaned in the semi-cleanroom, the OAV lid is temporarily reattached and the OAV is moved into the cleanroom. Using the 40 ton cleanroom crane and proper rigging, pictured in Figure 21, the OAV is lifted and lowered into a cleaned SSV located in the AD assembly pit. The OAV is set down on the bottom reflector, a 4.5 m diameter circular sheet of specularly reflecting material sandwiched between two 10 mm thick acrylic sheets. The reflector rests on the SSV bottom ribs. Surveys are then done on the OAV lid using a Leica System 1200 Total Station to determine if the OAV is level and if the OAV ports line up with the previously surveyed locations of the SSV ports [11]. The vessel is shimmed, re-situated, and re-surveyed until the SSV and OAV ports are acceptably aligned and levelled. Survey details and results will be further discussed in Section 6.3.6. When surveys are complete, stainless steel hold-down mechanisms are installed over the OAV lifting hooks to hold the OAV in place during all future AD activities. Once the OAV is in place, its lid is removed and the OAV flange is surveyed. Shims are then installed along with the IAV support pucks, to ensure that the IAV is level when installed. A cleaned IAV is lifted using the same lifting frame and slightly different rigging, and lowered into the OAV until it is resting on the pucks. The IAV ports and top are then surveyed to check if they are concentric with the other OAV and SSV ports, and repositioned and shimmed if necessary. This data is also included in the Section 6.3.6. Once proper alignment has been achieved, the hold-down mechanisms are engaged to secure the IAV to the OAV. A final survey of the IAV top is recorded to provide flatness and concentricity data for geometric characterization. A photo of the nested acrylic vessels can be seen in Figure 22. Assembled ladders of PMTs are installed after the securing of the vessels. Figure 22. A photograph of the assembly of a pair of ADs in the cleanroom. Assembly for the two vessels is at different stages in the photo: AD1, on the left, is shown with a nested IAV and OAV, while AD2, on the right, is shown with an OAV with lid attached. At this point, the IAV connection tubes are installed, leak-checked, and removed so that the OAV lid can be installed and its double o-ring seal leak-checked. The top AD reflector, which rests on top of the OAV lid, can then be installed. Next, OAV connection tubes are installed onto the OAV lid and are leak-checked. If all seals pass their leak-checks the SSV lid can then be installed. 
Once the SSV lid is secure, off-center OAV connection tubes are secured to the SSV, and IAV connection tubes are fed down through the OAV tubes and connected to the IAV ports. With the the lid and connection tubes properly installed, the final three-zone leak check can be performed. The overflow tank assembly is then assembled and connected to the central OAV and IAV tubes and SSV lid, completing the AV and overflow tank installation. A picture of overflow tank installation can be seen in Figure 8(b). Acrylic Vessel Characterization and QA Tests The acrylic vessels must undergo quality assurance testing to ensure that the vessels meet all engineering and physics requirements previously mentioned in this paper. The QA program described here measures all possible geometric, optical, and material properties relevant to engineering and physics considerations, as well as documenting all stages of the AV life cycles. Table 6 summarizes the measurements that have been made, and Figure 23 describes when during the AV life cycle each measurement was made. In this section, each AV characterization measurement will have its methods described and its results summarized. Category Vessel Table 6. Overview of vessel properties measured for quality assurance and characterization. Tests on the vessels were performed at Nakano, Reynolds, or at the SAB. Many tests on the raw material samples were conducted in labs at University of Wisconsin -Madison or National Taiwan University. Radioactivity measurements were done by Berkeley National Laboratory and California Institute of Technology, and the bulk of chemical compatibility information was provided by studies at Brookhaven National Laboratory. Design Requirements and Materials Selection Initial quality assurance and R&D measurements were done during the design stage to determine the correct material with which to build the acrylic vessels. Of particular concern were the optical, radioactive, and mechanical properties of candidate materials, and their compatibility with prospective target and gamma catcher liquids. Optical Properties The transparency requirements of the acrylics used in the IAV and OAV are dictated by the emission and transmittance spectrum of the AD liquids, as well as by the quantum efficiency spectrum of the Daya Bay PMTs [12,13]. From these inputs, it was determined that the IAV and OAV acrylic needs to be highly transparent to photons with wavelengths from 360 nm to 500 nm. The specifications on transmittance for a 10-15 mm sample in air for IAV and OAV acrylics for Daya Bay were 84% at 360 nm, 88% at 380 nm, 90.5% at 400 nm, and 91.5% at 500 nm. These values mirror those used in the SNO experiment. Acrylic sheets with transmittances of a few percent below specifications for lower wavelengths were also deemed acceptable as a majority of the scintillation light propagated in the Daya Bay liquids is above 400 nm. During the design phase, a variety of samples were tested using an SI Photonics Model 440 UV-Vis Spectrometer [14] to identify acrylic types that met these transmittance specifications. UVT acrylic meets these requirements, while standard UVA acrylics only begin transmitting light appreciably at 400 nm. Eventually, thin UVT acrylic sheets from Polycast were selected for the OAV walls, cast blocks of UVT acrylic from Reynolds were selected for the OAV lid and bottom, and PoSiang UVT acrylic was selected for the IAV. 
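The transmittance specifications above can be turned into a rough lower bound on the bulk attenuation length once Fresnel surface losses are removed. The sketch below is a minimal estimate for a 15 mm sample, assuming n ≈ 1.49 for acrylic and neglecting multiple internal reflections (a percent-level correction).

```python
import math

N_ACRYLIC = 1.49                  # assumed index of refraction for acrylic
SAMPLE_THICKNESS_MM = 15.0        # specification is quoted for a 10-15 mm sample

R = ((N_ACRYLIC - 1) / (N_ACRYLIC + 1)) ** 2   # Fresnel reflectance per surface
surface_factor = (1 - R) ** 2                  # two surfaces, multiple reflections ignored

spec = {360: 0.84, 380: 0.88, 400: 0.905, 500: 0.915}   # transmittance specs in air

for wavelength_nm, t_meas in spec.items():
    t_internal = t_meas / surface_factor
    gamma_mm = -SAMPLE_THICKNESS_MM / math.log(t_internal)
    print(f"{wavelength_nm} nm: implied bulk attenuation length > ~{gamma_mm/1000:.2f} m")
```

The implied attenuation lengths (tens of centimeters in the near UV, over a meter above 400 nm) are long compared with the few-centimeter average acrylic pathlength quoted later, which is why sheets a few percent under specification at short wavelengths could still be accepted.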
UVA acrylic was used for the overflow tanks and connection hardware, as these components are either small or far removed from the detector target. Transmittance data from the UVT acrylics are presented in Section 6.2.3. Radioactivity Testing Acrylic is an organic compound and does not contain many of the long-lived radioactive isotopes that are present in other optical window materials, such as glass. To ensure that radioactive backgrounds from the selected production acrylic types were acceptably low, samples of each type of acrylic were counted for radioactivity using a high-efficiency HPGe detector situated inside a lowbackground external radioactivity shield. Measurements were made with similar setups at either a concrete-shielded surface facility at Berkeley National Laboratory or at an underground laboratory at the Oroville Dam in Oroville, California [15]. Radioassay results are listed in Table 7. Note that the value for all measurements is only an upper bound, which is limited by the sensitivity of the equipment, the size of the sample, and the duration of the assay. In order to keep singles rates and correlated backgrounds at an acceptable level, it is desired that the intrinsic bulk radioactivity contribution from each AV component be lower than the radioactivity requirement for the 20 kg of Gadolinium salt used to manufacture the Gd-LS. Table 7 includes the measured radioactive isotope concentration upper limits for each acrylic type, as well as the isotope concentrations that will achieve the radioactivity requirement. For Polycast acrylic and for 40 K concentrations, these requirements have been met by the measurements. More precise measurements have been planned to lower the limits for Reynolds and PoSiang acrylics, so that low backgrounds in these acrylics can be confirmed. Table 7. Measured and required limits on concentration of radioactive isotopes in the production AV acrylics. Materials Compatibility It is important to test optical and mechanical compatibility between acrylic and candidate detector liquids, as many solvents and some liquid scintillators are known to cause crazing in stressed regions in acrylic. Mechanical compatibility testing was carried out in a manner described in the Handbook of Acrylics, the standard reference text on acrylics [16]; figures of the test setup can be seen in Figure 24(a). Strips of acrylic of well-known dimensions were held in position at one end and stressed on the other with a known weight. A fulcrum was placed underneath the middle of the acrylic strip; above the acrylic in the area of the fulcrum a filter paper was placed and saturated with the test liquid. The stressed acrylic was then observed over a period of time to observe any changes resulting from the liquid-acrylic contact. Pseudocumene, a liquid scintillator solvent used in other experiments such as KamLAND, caused the stressed acrylic piece to break within one hour of loading the acrylic. Linear alkyl benzene (LAB), a commonly used solvent for detergents, did not have any appreciable effect on the acrylic after 30 hours of stressing, making it a good candidate use at Daya Bay. Optical compatibility testing was done by placing acrylic samples into samples of candidate liquids for extended periods of time at elevated temperatures, to accelerate any possible leaching or other chemical interactions between the acrylic and liquid. The degradation rate with time of the LAB transmittance when exposed to acrylic was negligibly small after 21-28 days of testing at 40 • C. 
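To connect concentration limits such as those in Table 7 with the ~1 Hz contamination budget discussed in the cleaning section, the decay rate of the parent isotopes can be estimated from standard specific activities. The sketch below uses approximate conversion factors and placeholder concentrations (not the measured limits in Table 7), and it counts only parent decays; chain daughters and the fraction of decays that actually deposit energy in the target are not modeled.

```python
# Approximate specific activities (parent isotope only, standard values):
#   1 ppb U-238   ~ 12.4 mBq per kg of material
#   1 ppb Th-232  ~  4.1 mBq per kg
#   1 ppm natural K ~ 31 mBq per kg (from K-40)
MBQ_PER_KG = {"U238_per_ppb": 12.4, "Th232_per_ppb": 4.1, "K_per_ppm": 31.0}

acrylic_mass_kg = 1850 + 910          # one OAV plus one IAV, from the mass measurements
assumed_concentration = {"U238_per_ppb": 1.0, "Th232_per_ppb": 1.0, "K_per_ppm": 0.1}  # placeholders

total_bq = 0.0
for key, conc in assumed_concentration.items():
    activity_bq = MBQ_PER_KG[key] * conc * acrylic_mass_kg / 1000.0
    total_bq += activity_bq
    print(f"{key}: {activity_bq:.2f} Bq")
print(f"Total parent-isotope activity: {total_bq:.1f} Bq")
```

Most of this bulk activity never produces a signal above threshold in the Gd-LS, which is why ppb-level concentration limits can still satisfy a ~1 Hz detected-rate budget; the sketch only illustrates the unit conversion.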
as LAB was the most optically and mechanically compatible liquid with acrylic, it was selected as the liquid scintillator for the production target and gamma catcher liquids. Mechanical Strength To ensure that the AVs are structurally sound over the life of the experiment, the vessels, like most other large acrylic structures, are designed for a maximum stress of 5 MPa and a lifetime of 10 years. A setup was constructed to independently test these limits by stressing acrylic samples at known amounts and observe the effects over time. A photograph of the setup is shown in Figure 24(b). Long, thin acrylic pieces of well-known cross-section are hung from a frame from one end while known weights are hung from the bottom of the acrylic, which creates uniform, well-known stresses in the main length of the acrylic. The first trial consisted of an acrylic strip stressed to 24 MPa. Significant crazing was viewed in the piece within 24 hours, and the piece broke after 48 hours. The second strip was stressed to 15 MPa, and moderate crazing was visible after 8 days of hanging. This piece was then removed and replaced with another strip that was stressed to 10 MPa. Minor crazing appeared after about 6 months and has persisted but not worsened appreciably after 2 full years of hanging. From these trials, the figure of 5 MPa over 10 years seems acceptable; vessels were designed to be supported such that stresses were well below the 5 MPa limit for all parts of the vessel structure. In addition, these tests show that short-term (on the order of a few days) stresses around 10-15 MPa are acceptable; this figure provides a guide to acceptable maximum stresses allowed during short-term processes such as shipping, lifting, and filling. Optical and Mechanical Characterization Visual Inspection By visually inspecting the acrylic vessels, features and defects that alter the acrylic's clear, completely transparent appearance can be identified. Careful visual inspection reveals bond lines where acrylic sheets are joined together with bonding syrup. Identification of bonds is made easier by placing polarized filters on either side of the vessel that are aligned perpendicularly. In this config-uration, birefringence effects cause bonds to appear lighter in color than the surrounding acrylic. The joining of two acrylic sheets is pictured in Figure 25. By comparing the visible bond line pattern to the designed vessel plans, it was confirmed that the vessels were constructed as designed. Visual features are largely found along bond lines. Isolated groups of small bubbles caused by the exothermic curing reaction are visible in a few spots, especially along the middle horizontal bond joining the two vessel barrel cylinders. Short vertical crazing marks, mostly less than 1/4" long and less than 1/16" deep were also visible at places on or just above the same bond; an example is shown in Figure 26. The cause of this minor crazing is likely excess pressure placed on the inside of the wall cylinders while bonding them together: in order to hold the bonding syrup between the cylinders, dams needed to be pushed firmly against the sides of the walls. These slightly crazed areas are not a concern, as the stress that caused them is no longer being applied. In addition, they occur in areas of low expected stress under normal conditions. Areas of more significant crazing were sanded out and filled with bonding syrup. Minor chips and bonding and machining features were also visible but not worrisome. 
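The stress figures quoted for the hanging-strip tests above follow from simple statics: the stress is the hung weight divided by the strip cross-section. The sketch below shows the load needed to reach a given stress for an assumed strip cross-section; the actual strip dimensions are not quoted in this paper.

```python
G = 9.81                      # m/s^2
WIDTH_M = 0.020               # assumed strip width (20 mm) -- placeholder
THICKNESS_M = 0.003           # assumed strip thickness (3 mm) -- placeholder
area_m2 = WIDTH_M * THICKNESS_M

for stress_mpa in (5, 10, 15, 24):
    force_n = stress_mpa * 1e6 * area_m2
    print(f"{stress_mpa:2d} MPa on a {WIDTH_M*1e3:.0f} x {THICKNESS_M*1e3:.0f} mm strip "
          f"needs ~{force_n/G:.0f} kg hung from the end")
```

Hanging masses of tens of kilograms thus suffice to probe the 5-24 MPa range explored in these tests.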
Surface Properties

The surfaces of the OAVs and IAVs were polished to 1 to 3 micron grit by Reynolds and Nakano. This gives the vessels a largely transparent finish with small halos of diffusely reflected light visible from certain perspectives, as seen in Figure 27. The IAVs appear to have a slightly more diffusive surface than the OAVs. However, these surface effects will largely disappear when the vessels are immersed in liquids. Figure 28 illustrates the disappearance of surface features and roughness when acrylic samples are immersed in liquids of similar index of refraction: panel (a) shows an acrylic sample polished to 400 grit in air, and panel (b) shows the same sample immersed in mineral oil. Exact index of refraction values will be discussed in Section 6.2.4.

The only minor surface defect present in the acrylic vessels was on OAV1. The Genmac plastic wrap used to protect the outer surface of the vessel during shipping was left on the OAV1 bottom after unpackaging. Over four months of storage, the plastic wrap degraded, leaving a sticky film in spots on the bottom of the vessel. In lieu of using nonpolar compounds whose compatibility with the acrylic was unknown, brass brushes were used to abrade the surface of the OAV and remove the sticky residue. The surfaces in these abraded areas are more diffusive than other areas, while still mostly transparent. We qualitatively estimate that 10-20% of the bottom surface of OAV1 is abraded. These spots disappeared as OAV1 was filled with liquid.

Transmission Spectrum

Transmittances were measured for each individual production acrylic sheet used in all IAVs and OAVs. This was done to ensure that poorly transmitting sheets were not included in the vessels, which would have caused asymmetries in detector response. OAV acrylic sheet transmittance measurements were done using an SI Photonics Model 440 UV-Vis Spectrometer and a Cary Model 300 UV-Vis Spectrometer. The transmittance of the OAV bonds was also measured using the same equipment. IAV acrylic sheet transmittance measurements were done using a PerkinElmer Lambda 650 UV-Vis Spectrometer. A typical transmittance spectrum for the Reynolds and Polycast acrylics used in the OAV and the PoSiang acrylic used in the IAV can be seen in Figure 29. It was found that all supplied sheets and bonds were optically acceptable for use in the Daya Bay experiment. The systematic uncertainties in the transmittance measurements are of the order of 1%. Uncertainties arise largely from geometric differences between acrylic samples, as well as from the precision of the spectrometers.

Index of Refraction and Attenuation Length

Attenuation length and index of refraction are intrinsic material properties, as opposed to transmittance, which is thickness-dependent. Thus, in order to properly model the Daya Bay detectors, the acrylics' indices of refraction and attenuation lengths must be known. Reflection at the acrylic surface and light transmission through the bulk acrylic are determined by the Fresnel equation and by the Beer-Lambert law, respectively,

R = \frac{(n-1)^2}{(n+1)^2}, (6.1)

T = e^{-x/\Gamma}, (6.2)

where n is the index of refraction, x is the material's pathlength, and \Gamma is the attenuation length of the material. In the case of parallel-plane interfaces, such as the acrylic vessel walls, the picture is complicated by multiple internal reflections.
In these cases, the measured transmittance and reflectance of a sample with parallel faces are

T^* = \frac{(1-R)^2 T}{1 - R^2 T^2}, (6.3)

R^* = R(1 + T T^*), (6.4)

with T and R as given in Equations 6.1 and 6.2. By measuring T* and R*, one can numerically solve for T and R. This measurement method is called an RT measurement, and it has been used to determine the optical properties of UVT acrylics in the past [17].

R* and T* measurements for one sample of PoSiang acrylic were done using a Perkin-Elmer Lambda 650 with a 60 mm integrating sphere, which could be set to measure reflectance or transmittance. The 0.006 uncertainty on the index of refraction measurement was set by the systematic uncertainties in the T* and R* measurements. Results for particular wavelengths can be seen in Table 8. As a cross-check, another sample from another sheet of PoSiang acrylic was sent, along with two Polycast samples and one Reynolds sample, to have their indices of refraction measured with a V-block refractometer by Schott North America [18]. This measurement could be performed at wavelengths above 465 nm with an uncertainty of <0.001. One can see good agreement between the RT and Schott measurements. In addition, the low-wavelength RT measurements were in good agreement with low-wavelength RT measurements made in [17]. It was also noted that all samples displayed very similar indices of refraction: the variation in R resulting from differences in index of refraction at this level is negligible, on the order of 10^{-7}.

Table 8. Index of refraction measurements using the RT method and measurements procured from Schott NA (columns: acrylic type, wavelength in nm, RT-measured n, Schott-measured n). Systematic uncertainties on the measurements are 0.006 and <0.001, respectively.

Given the small variation of the index of refraction among all acrylics in the study, we treat all acrylics as having identical indices of refraction for the purposes of the Daya Bay experiment. With the index of refraction spectrum determined, attenuation lengths for any individual sheet were calculated using the transmittance measurement done for optical QA. The attenuation length was calculated for the best- and worst-performing sheets from the transmittance testing to establish the range of attenuation lengths for all acrylic types. These spectra can be seen in Figure 30. It should be noted that the difference between the best- and worst-performing samples for each acrylic type is at least partially caused by the systematic uncertainties from the transmittance measurement. Acrylic sheet selection and placement in the OAVs and IAVs was uncorrelated with sample optical performance, ensuring that no systematic detector-to-detector variations in light yield would arise from this source. Monte Carlo simulations have shown that the average acrylic pathlength of optical photons in the Daya Bay detector is a few cm. This means that the presented range of attenuation lengths would result in percent-level or lower position-related light yield asymmetries between detectors, and a negligible overall light yield difference between detectors.

Ultraviolet Degradation

It has been demonstrated that UVT acrylics optically degrade when exposed to high levels of UV light [12]. If such degradation takes place in the AVs during or after construction through exposure to sunlight or excessive factory lighting, the loss of transmittance would not be measured in the QA program, resulting in an unexpected decrease in photoelectron yield during detector operation.
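Returning to the RT relations in Equations 6.3 and 6.4 above: they can be inverted numerically to recover n and Γ from a measured (T*, R*) pair at each wavelength. The minimal sketch below uses synthetic inputs generated from n = 1.49 and Γ = 1 m, not measured Daya Bay data, and the published analysis may differ in detail.

```python
import math

def forward(n, gamma_mm, x_mm):
    """Slab transmittance/reflectance from Eqs. 6.1-6.4 (incoherent reflections)."""
    R = ((n - 1) / (n + 1)) ** 2
    T = math.exp(-x_mm / gamma_mm)
    T_star = (1 - R) ** 2 * T / (1 - R ** 2 * T ** 2)
    R_star = R * (1 + T * T_star)
    return T_star, R_star

def invert(T_star, R_star, x_mm, iterations=50):
    """Recover (n, gamma) from measured (T*, R*) by fixed-point iteration."""
    T = T_star                                    # starting guess
    for _ in range(iterations):
        R = R_star / (1 + T * T_star)             # from Eq. 6.4
        # Eq. 6.3 rearranged as a quadratic in T:  T* R^2 T^2 + (1-R)^2 T - T* = 0
        a, b, c = T_star * R ** 2, (1 - R) ** 2, -T_star
        T = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    n = (1 + math.sqrt(R)) / (1 - math.sqrt(R))   # invert Eq. 6.1
    gamma_mm = -x_mm / math.log(T)
    return n, gamma_mm

x_mm = 10.0
T_star, R_star = forward(1.49, 1000.0, x_mm)      # synthetic "measurement"
n, gamma = invert(T_star, R_star, x_mm)
print(f"recovered n = {n:.3f}, attenuation length = {gamma/1000:.2f} m")
```

The two measured quantities determine the two unknowns, which is why a single integrating-sphere instrument switched between reflectance and transmittance modes suffices for this characterization.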
In addition, UV blacklights used to cure acrylic bonds on the IAVs can also be sources of optical degradation. To quantify these issues, the rates of degradation of the three types of OAV and IAV acrylics were measured. Based on these rates, safeguards to avoid dangerous UV exposures were implemented. The degradation QA program consists of taking two samples of each type of acrylic used in the IAVs and OAVs and subjecting one of each type to either direct sunlight for a period of 30 days, or to UV blacklight-blue light for a period of 30 days. The transmittance of each sample is measured at numerous times throughout the course of the exposure; the measurements for Reynolds acrylic can be seen in Figure 31. At long timescales, degradation effects are very apparent. One can also see UV curing effects where transmittance initially improves at short wavelengths. These curing effects dominate in the UV blacklight samples, and no long-term degradation is visible. Since curing effects are only at low wavelengths, where the Daya Bay liquids emit little light, UV bond curing has negligible effects on light yield for the Daya Bay detectors, and thus likely acceptable practice for most liquid scintillation experiments ultilizing acrylic. Using UV meters and weather monitoring websites, total UV dosage can be calculated for each period of sunlight exposure. Then, by combining the AD liquid emission spectrum with the measured transmittances, a relationship between UV dosage and light yield loss can be established, as in Figure 32, for sun exposure. By fitting these points one can procure the decay rates of the acrylic and calculate acceptable UV exposure times for each component based on the expected average acrylic pathlength of optical photons in the Daya Bay detector. Acceptable exposure time is defined as the amount of time in which UV exposure caused a 2% decrease in total estimated light yield from one vessel. These values for each component are listed in Table 9. Table 9. Acceptable exposure times for the OAV and IAV in various environments. The factory limits are quoted with and without UV-filtered windows. Exposure times are kept underneath these values by altering shipping and storage procedures accordingly. Acrylic components are kept under plastic wrap or tarps while not being directly worked on. The vessels or individual sheets are never kept outside uncovered. UV filters have also been installed on windows of the buildings in which the vessels were manufactured, stored, cleaned or installed. Acrylic Vessel Stress Testing As mentioned before, the specification for maximum acceptable long-term stress on any area of the AV is 5 MPa. While Reynolds and Nakano annealed the vessels to reduce residual stress levels, acrylic stress measurements were not performed by Reynolds on the OAVs. To evaluate residual stresses, a method for quantitatively measuring stress on the constructed vessels using a StrainOptics PS-100-LF-PL polarimetry system was developed [19]. Figure 33. Photographs of the two components of the stress-measurement system: the larger illuminator and smaller analyzer. Also pictured is a demonstration of the system's use, as well as the effect of its use: birefringence made visible in a OAV prototype lifting tab resulting from residual stresses. The system utilizes the change in index of refraction with introduced stress experienced by acrylic to retard components of polarized light in stressed acrylic regions. 
This creates interference patterns, which can be quantitatively linked to stress values using a system of rotating quarter-wave plates. The entire system needs to be rotated into alignment with the principal stress axis at points of suspected high stress, and thus a rotating frame was developed for both system components. The two system components can be seen in Figure 33, along with a fringe pattern exhibited by residual fabrication stresses in the OAV lifting hook. The stress measurement system was calibrated using the acrylic stressing setup described in Section 6.1.4. The device was then brought to Reynolds to be used during QA testing of the prototype and production OAVs. Stress-testing results are listed in Table 10. The uncertainty in the provided measurements is dependent on the thickness of the acrylic sample, but in most cases it is less than 10%. All measured values are below the 5 MPa long-term maximum stress limit. Qualitative stress testing was performed on the IAVs by rotating polarized filters on either side of an acrylic sheet to look for areas of high birefringence.

Table 10. Stress levels measured in areas of concern for OAV1 and OAV2. The highest measured stress in each location on either OAV is quoted here.

  Location on OAV                          Highest measured stress (MPa)
  Lifting tabs                             1.3
  Bonds                                    3
  Crazed areas along equator bond          ~0
  Lid ribs with lid unattached to OAV      3

The main drawback of this measurement system is that it measures integrated stress through the entire thickness of an acrylic part, making it impossible to measure stresses on individual surfaces in some parts of the OAVs. For example, the slightly crazed areas near the OAV equator bond, pictured in Figure 25, despite being annealed, should exhibit residual stress from the outward force applied to hold dams around the equator during the bonding process. However, this type of force may exert a tension stress on the OAV outside and a compression stress on the OAV inside in that area, which cause birefringence in opposite directions. The net effect is that the compression and tension integrate through the piece as the polarized light traverses the wall, resulting in no net observed birefringence despite the existence of stresses. This is the case in Figure 25: microcrazing is present in the black area above the visible bond line, and no microcrazing was identified below the bond line, yet both areas exhibit the same lack of birefringence. This drawback does not affect the validity of the measurements on the lifting tabs, bonds, or lid ribs, as the stresses experienced in these areas would be mostly uniform through the entire length of acrylic traversed by the polarized light.

Geometric Characterization

The geometric parameters of the Daya Bay AVs must be measured to ensure identicalness of detector shape and target mass between detectors, and to make sure the detector will fit together properly during assembly. Dimensional parameters were measured at either Reynolds or Nakano, while the mass and survey measurements were made at the Daya Bay SAB.

Wall thickness

The designed thickness of the OAV barrel, top, and bottom is 18 ±5.2 mm; those values for the IAV are 10 ±1 mm for the barrel and 15 ±1 mm for the bottom and top. Barrel wall thickness measurements are made at various heights and angles around the vessel with an ultrasonic thickness gauge.
Data for IAV and OAV barrels can be seen in Figure 34 The average vessel wall thickness was 16.8 ±1.3 mm and 16.4 ±1.2 mm for OAV1 and OAV2 and 10.55 ±0.98 mm and 10.69 ±0.8 mm for IAV and IAV2. These values are within the acceptable thickness range. There were a few isolated areas, mostly along bonds, where thickness was below tolerance by 1-2 mm; these areas accounted for less than 1% of the total barrel surface area, and are not a concern. Vessel thinness in bond areas likely originates in the sanding and polishing process. When sheets are bonded together and polished, any surface discontinuities from uneven sheet alignment or shape are evened out, which, in some cases, results in the removal of a significant amount of acrylic material. Less systematic checks of OAV and IAV top and bottom thickness were also done, showing no deviations from tolerances. Diameter Designed outer diameters are 4000 ±4.8 mm for the OAVs and 3120 ±5 mm for the IAVs. Outer diameters were measured at various heights on all four AVs using a pi tape measure. Outer-diameter data for all four AVs can be seen in Figure 35. Average diameter was 4002.4 ±0.9 mm and 3997.4 ±0.5 mm for OAV1 and OAV2 and 3123.1 ±1.2 mm and 3123.1 ±2.0 mm for IAV1 and IAV2. The systematic 0.5 cm diameter difference between OAVs is within engineering tolerance, and will not have a significant impact on detector response. For the OAVs, inner diameter was also measured to ensure that the vessels had a sufficiently circular cross-section. The specified inner diameter is 3964 ±13.6 mm, and was measured with a Leica laser distance measure. Average values were 3968.6 ±7.8 mm and 3965.3 ±5.04 mm for OAV1 and OAV2, respectively. These values match well with the outer diameter measurements in showing that OAV1 is slightly larger than OAV2. The large majority of measurements were well within tolerance. The inner vessels were not designed to be entered and exited, so inner diameter measurements were not made. Instead, the horizontal cross-section of the vessel was checked by measuring angles around the outside of the IAV across the bottom inside of the IAV. The angle between two positions across the AV from one another as measured from a location halfway between those points along the AV's circumference was measured to be between 89.8 • and 90.1 • , well within the specified range of 89.2 • and 90.8 • . Height Specified heights, defined as the distance from the AV bottom to AV flange for the OAV and from the AV bottom to AV top edge for the IAV, are 3982 ±3.0 mm for the OAVs and 3100 ±5.0 mm for the IAVs. Heights were measured on all vessels with a tape measure, and can be seen in Figure 36. The average height values were 3981.0 ±1.8 mm and 3980.25 ±4.7 mm for OAV1 and OAV2 and 3106 ±1.31 mm and 3101 ±1.55 mm for IAV1 and IAV2. One can see that IAV1 is on average taller than IAV2 by 0.5 cm. Some of OAV2's heights were out of spec by a few mm, but this difference was deemed acceptable for the OAVs. Perpendicularity and Profile To test for straightness of the OAV barrel as well as the barrel's perpendicularity with respect to the bottom of the OAV, a plumb bob test was done. First, a plumb bob was hung from a string taped to the top of the OAV flange. The distance between the string and vessel walls was then measured. The specification on the overhang of the OAV flange over the outside of the OAV walls was 40 ±10 mm. Results from OAV1 can be seen in Table 11. Table 11. Measured distance between a plumb bob string and the side of OAV1 in mm. 
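Returning to the wall-thickness and diameter summaries above: the quoted means and standard deviations come from straightforward aggregation of the point measurements, together with a check against the design tolerances. The minimal sketch below uses made-up readings, not actual Daya Bay measurements.

```python
from statistics import mean, stdev

# Hypothetical ultrasonic thickness readings (mm) at several heights/angles
# on one OAV barrel -- illustrative values only.
readings_mm = [16.9, 17.2, 16.4, 15.8, 17.0, 16.6, 16.1, 17.3, 16.8, 15.5]

NOMINAL_MM, TOL_MM = 18.0, 5.2          # OAV barrel specification: 18 +/- 5.2 mm
out_of_tolerance = [t for t in readings_mm if abs(t - NOMINAL_MM) > TOL_MM]

print(f"mean = {mean(readings_mm):.1f} mm, std = {stdev(readings_mm):.1f} mm")
print(f"{len(out_of_tolerance)} of {len(readings_mm)} points outside tolerance")
```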
The specification on this value is 40 ±10 mm. The proper orientation of the bottom and the proper wall shape is reflected in the plumb bob data: almost all measured vessels were within the specification. Minor wall features are described by the data: for example, at 270 • , the vessel bulges outward in the middle by about 1 cm, while at other angles, like 180 • , the vessel middle bulges inward. None of these features are expected to affect appreciably the physics response or mechanical stability of the detector. The measurements from OAV2 are also similarly within tolerance. O-Ring Grooves To make sure that the vessels do not leak, dimensional measurements of the OAV o-ring groove were carried out separately from the standard leak-checking regimen. If the groove is too deep, there will not be enough pressure on the o-ring to seal properly, while if it is too shallow, the o-ring will be overcompressed and may lose its elasticity. The height and width of the two concentric OAV flange o-ring grooves need to meet specifications, which are 4.4±0.3 mm and 6.9±0.3 mm, respectively. O-ring groove dimensions were measured at various angles with calipers accurate to 0.1 mm. Average o-ring groove width was 6.7±0.2 mm and 6.8±0.3 mm for OAV1 and OAV2, respectively. While widths were in tolerance, o-ring groove depths were initially too shallow, caused by the removal of excess material on the OAV flange during sanding and polishing. This was fixed by Reynolds before shipment by making a specially-designed sanding tool to remove more acrylic from the bottom of the grooves. The grooves were then remeasured at an average depth of 4.4±0.3 mm. O-ring grooves were cut deeper on future OAVs to compensate for the loss of flange material during polishing. Vessel Positioning: In-Situ Survey During installation, position surveys are done on all major AD components to assure that they are placed correctly in the AD. The surveys are done using a Leica System 1200 Total Station. Any group of survey measurements are started by placing the Total Station and tripod at a fixed location in the cleanroom. The coordinate system is set by taking the coordinates of a fixed reference point. The Total Station is then aimed at retroreflectors secured to the desired survey locations. The Total Station automatically locates the retroreflector using a laser sight and records x-y-and z-coordinates. Using this method, surveys were done of points on the SSV and OAV bottom, sides, and lid, and on the IAV top. The azimuthal orientations of the OAVs and IAVs are dictated by the alignment of the SSV, OAV, and IAV calibration ports. The flexibility of the bellows connecting the calibration and overflow systems to the AVs allows a small amount of acceptable misalignment. The port location survey data is pictured in Table 12. The dimensions presented in this table are illustrated in Figures 38 and 37. All ports are aligned to within 5 mm with the exception of the IAV1 Gd-LS port, which is misaligned by 2 cm. While not ideal, the connection hardware was able to flex to make up for this larger than expected misalignment. The Z-positions of the various ports relative to the SSV top are mostly consistent between detectors to within 1 cm. The discrepancy between SSV central port Z position relative to the SSV top is likely a relic of using different retroreflector stands for each SSV. 
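The port-alignment figures quoted above reduce to planar offsets between surveyed port centers. The sketch below shows that computation for hypothetical Total Station coordinates; the numbers are illustrative, not the surveyed values behind Table 12.

```python
import math

# Hypothetical surveyed port-center coordinates (mm) in the cleanroom frame.
ssv_ports = {"central": (0.0, 0.0), "Gd-LS": (1250.0, 0.0), "LS": (-1250.0, 0.0)}
iav_ports = {"central": (1.2, -2.1), "Gd-LS": (1262.0, 14.0), "LS": (-1248.5, 3.0)}

ALIGNMENT_LIMIT_MM = 5.0

for name, (xs, ys) in ssv_ports.items():
    xi, yi = iav_ports[name]
    offset = math.hypot(xi - xs, yi - ys)
    status = "OK" if offset <= ALIGNMENT_LIMIT_MM else "needs bellows flex / re-shim"
    print(f"{name:8s}: offset = {offset:5.1f} mm  ({status})")
```

A 5 mm criterion passes most ports, while a centimeter-scale offset, like the one reported for the IAV1 Gd-LS port, is flagged for the flexible connection hardware to absorb.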
It is also interesting to note that the central port of each OAV appears to be around 1 cm lower than that OAV's LS port, indicating deflection of the OAV lid under its own weight. This sag will be reduced when the detector is filled with liquids. The shapes of the LS and MO regions in the as-built antineutrino detectors are partially determined by the physical positioning of the OAV and IAV. AV positioning information consists of levelness, Z, and R positions of the IAV, OAV, and SSV, and is provided by survey data gathered Table 12. Port alignment data. The magnitudes of x-y plane offsets for AVs are given with respect to the corresponding SSV ports. The z-positions are all given with respect to the top of the SSV lid center. by the method described above. The survey data is analyzed to give the horizontal and vertical positions of the calculated centers of specific surfaces on the AVs and SSV relative to the calculated center of the SSV lid. These dimensions correspond to the horizontal and vertical distances between point A and points B, C, D, and E in Figure 38. Table 13 presents an overview of the surveyed vessel offsets between point A and points B, C, D, and E in cylindrical coordinates: magnitude of Z offsets of the centers of given vessel surfaces are presented along with magnitudes of X-Y offsets. In addition, levelness is described by the difference in absolute Z-coordinate between one side of a vessel and the other. The vessels are largely concentric, having less than 5 mm offset in most cases. Z-positions of the various surfaces are in close agreement between ADs, and tilts of these surfaces are all around 5 mm or less. Survey data was also taken to determine how well the IAV position was constrained within an OAV; While the IAV hold-downs constrain the IAV in z and θ , it is the IAV ribs interfacing with Figure 38. Depiction of an AD with offset AVs that shows key reference points (B, C, D, and E) whose vertical and horizontal distances from the calculated SSV lid center (point A) are calculated from the AV survey data. the OAV rib stops that constrain the IAV's position in r. The difference in length between the IAV bottom ribs and the OAV rib stop gap was surveyed and then decreased to an acceptable 3 mm difference by affixing teflon spacers to the IAV ribs ends. Vessel Mass The most accurate way to determine the total amount of acrylic in the antineutrino detectors is to measure the acrylic vessels' masses. This data also provides a measure of the identicalness of the acrylic vessels in pairs of detectors. The mass of each acrylic vessel was measured in between the cleaning of the vessels and their installation in the neutrino detectors in the Daya Bay Surface Assembly Building. While the vessel was lifted using a 10-ton crane, a Dillon ED-Junior crane scale placed in series between the crane and the acrylic vessel measured the mass of the vessel. The total measured masses are 1847±7 kg (OAV1), 1852±7 kg (OAV2), 907±7 kg (IAV1), and 916±7 kg (IAV2). Within the uncertainties of the measurement the masses are largely consistent with one another, indicating very equal amounts of non-scintillating material in the center regions Table 13. As-built vessel surface offset data, listed in cylindrical coordinates as X-Y offset magnitude, X-Y offset angle, and Z offset. The coordinate system is set such that the top of the SSV lid center is (0,0,0). Values are obtained from aggregating data from numerous survey locations. Uncertainties on these aggregated values are ±3 mm. 
of the filled ADs.

Vessel Volume

By combining a weight measurement with a measurement of the density of each type of acrylic, the total volume of acrylic in each detector can be calculated. The density of the three different acrylic types was determined by measuring the mass and volume of acrylic samples using a scale and caliper, respectively. The density of all acrylics used in the construction of the vessels was measured to be 1.19 g/cm³, with an uncertainty of 0.01 g/cm³. Given this value, the total volumes are 1.552±0.014 m³ (OAV1), 1.558±0.014 m³ (OAV2), 0.762±0.009 m³ (IAV1), and 0.770±0.009 m³ (IAV2).

Expected Liquid Volumes

The mass of the Gd-LS and LS will be measured to 0.1% or better during the filling process using load cells and Coriolis flowmeters. Prior to the filling of the AD we can make an estimate of the AD inner volumes using the geometric data of the acrylic vessel characterization. We use the averaged geometric value as the dimension, with the standard deviation of that value as the uncertainty. These values are listed in Table 14. For the Gd-LS region, we use the measured dimensions of the vessels along with the design value for the conical top of the vessel; we assign an uncertainty of 5% to this latter value. We obtain 23.44±0.027 m³ for the Gd-LS region.

Summary

We have successfully performed the design, fabrication, characterization, shipment, and installation of the acrylic vessels for the first two antineutrino detectors for the Daya Bay experiment. Nested pairs of acrylic vessels are a novel approach to the construction of a 3-volume detector filled with scintillating liquids for the precision measurement of reactor antineutrinos. An extensive production monitoring and quality assurance program ensures that the acrylic vessels meet the technical and scientific requirements for the Daya Bay experiment. The nested acrylic vessels exhibit a high degree of identicalness. This is a critical feature of the pair-wise measurement of reactor antineutrinos at near and far distances from the Daya Bay reactors. The optical properties and as-built dimensions and positions of the vessels have been well characterized. This is a prerequisite for obtaining sub-percent systematic uncertainties in the measurement of θ13.
FEA results for a scenario in which the liquid level is 30 cm higher on the inside of the OAV than on the outside.
(a) Close-up view of the overflow tank and connection hardware. (b) Photograph of an as-built overflow tank. The two overflow regions are clearly visible.
Figure 8. Close-up view and photograph of the overflow tank system, including OAV/IAV connection hardware. The parts are color-coded to emphasize the individual pieces. The stainless steel overflow tank cover and spider ring are not pictured in either drawing or photograph.
Figure 9. Cross-section of a filled AD. The path from the LS and Gd-LS bellows through to the overflow tanks is visible in this figure. Also note the LS and Gd-LS filling up their respective calibration ports.
(a) Bond locations on the OAV barrel. Top cylinder bonds are dashed, bottom cylinder bonds are dotted, and circumferential bonds are solid. (b) Bond locations on the OAV bottom. Solid lines indicate bonds connecting the ribs and hub to the main vessel. Dotted lines indicate the bottom-barrel bond and the bond joining the two bottom sheets. All colored parts are also bonded onto the main structure. (c) Bond locations on the OAV top: directly through the lid middle and around the OAV lid ports.
Figure 10. Approximate bond locations on the OAVs.
(a) Image of a fabricated OAV top after cleaning. Notice the extra calibration port for the LS region. (b) Image of a fabricated IAV top during installation. Notice in particular the bonded lid, unlike the OAV. (c) Image of an OAV bottom after finishing fabrication. Notice that the ribs and extra features are on the OAV inside. (d) Image of a fabricated IAV bottom during cleaning.
(a) Bond locations on the IAV barrel. Vertical bonds are solid and circumferential bonds are dotted. The top circumferential bond connects the IAV barrel to the IAV top, which is not pictured. (b) Bond locations on the IAV bottom. Dotted lines indicate bonds connecting the ribs and hub to the main vessel. Dashed lines indicate the bonds joining the four main bottom sheets. The hub is also bonded to the IAV bottom. (c) Bond locations on the IAV top. Bond line indications are similar to those of the IAV bottom, with the addition of solid lines indicating bonds connecting the IAV top ports.
Figure 12. Approximate bond locations on the IAVs.
Figure 13. A map of the entire shipping route for OAVs and IAVs. Circles indicate the origin locations of the OAVs (Grand Junction, CO, USA) and IAVs (Taipei, Taiwan), triangles indicate the ports of Yantian and Long Beach, through which the AVs travel, and the square indicates the final destination, Daya Bay, China.
(a) ISO drawing of an OAV in its shipping container. The lifting frame is also included in this drawing. (b) A photograph of a production OAV in its shipping frame.
Figure 14. An isometric drawing and photograph of the OAV shipping frame.
Figure 17. Temperature and pressure of the OAV environment during transport. Note that the changes of temperature and pressure with time convey the vessel's current general location.
over the course of 12 hours. Another temperature sensor located within the OAV packaging on its lid experienced temperature variations of only 20 °C.
Figure 19. Photograph of the double o-ring seal leak-checking setup for the OAV ports. The yellow seal at the bottom of the port is clearly visible, as is the gas input.
Figure 20. An overview of the gas circuit and appearance of the leak-checking system in the Daya Bay SAB cleanroom.
(a) Drawing of a lifted OAV.
(b) Photograph of the OAV just before lifting in the SAB.
Figure 21. A drawing and photograph of the OAV rigged for lifting. Visible are the lifting fixture and the rigging connected to the OAV via the OAV lifting hooks.
(a) Photograph of mechanical compatibility tests of various candidate target liquids with acrylic. (b) Photograph of an acrylic stress testing setup. The acrylic is the long clear bar connecting the top metal hitch to the bottom weight.
Figure 24. Photographs of two different stress testing setups, one for testing material compatibility and one for testing maximum stress limits.
Figure 25. Photograph of a horizontal bond line, viewed through perpendicular polarized filters. Microcrazing is present above this bond line but its visible appearance is masked by the polarized filters.
Figure 26. Microcrazing visible above the equator bond of OAV2. Backlighting and indirect-angle viewing of the crazed area makes it easier to view.
Figure 27. Surface appearance of an OAV. Halos of diffusely reflected light are visible at the top of the OAV. Despite this, the overall surface quality of the vessel is good, giving the vessel a largely transparent appearance.
Figure 28. A comparison of acrylic surface appearance when surrounded by two media of different refractive index, air (n∼1) and mineral oil (n∼1.36). Acrylic has n∼1.5.
Figure 29. Typical transmittances for PoSiang, Polycast, and Reynolds UVT acrylics. Sample pathlengths are given in the legend. To correct for light-collection systematics, spectra were normalized to give 92% transmittance at 650 nm, as would be expected for perfect transmittance from a parallel-plane acrylic sample in air.
Figure 30. Best and worst case attenuation length spectra for each type of IAV or OAV acrylic. The best and worst values are the product of the transmittance measurement uncertainties as well as actual variations in acrylic attenuation length. For PoSiang, only one sample's transmittance is pictured.
Figure 31. Degradation of Reynolds acrylic over time. Here, one can see in particular which wavelength ranges are most affected by UV degradation.
Figure 32. Estimated transmittance over the entire wavelength spectrum versus UV dosage for all three types of OAV and IAV acrylic. The degradation rate is represented by the slope of a line fitted to these points.
(a) OAV thickness deviation with height on the OAV. Measurements at various angles were aggregated to make each data point. (b) IAV thickness deviation with angular position around the IAV edge. Measurements at various heights were aggregated to make each data point.
Figure 34. IAV and OAV thickness measurements for both vessels, measured at various AV heights and angles. The error bars indicate the standard deviation of all thickness measurements made at the given height or angle. Nearly all measurements are within the tolerances indicated by the grey band in each figure.
(a) OAV diameter variations with height. A small but clear difference in diameter is visible. (b) IAV diameter variations with height.
Figure 35. IAV and OAV diameter measurements for both vessels, measured at various heights on the AVs. Only one measurement was taken at each height. Tolerances are indicated by the grey band in each figure.
IAV height variations with angle around the IAV outside edge.
Figure 36. IAV and OAV height measurements for both vessels, measured at various angles on the AVs. Only one measurement was taken at each height. Tolerances are indicated by the grey band in each figure.
Figure 37.
37Top view engineering drawing showing exaggerated offsets between IAV and OAV ports. The offset pictured here is made greater than the actual measured offsets so that misalignment of the ports is easily visible. Vessel Central Port Gd-LS Calib. Port LS Calib. Port Magnitude (mm) Z (mm) Magnitude (mm) Z (mm) Magnitude (mm) Z (mm) SSV1 Table 1. Breakdown of the Daya Bay detector systematic uncertainties. From [1]Number of protons 0.8 0.3 0.1 Detector efficiency Energy cuts 0.8 0.2 0.1 Time cuts 0.4 0.1 0.03 H/Gd ratio 1.0 0.1 0.1 n multiplicity 0.5 0.05 0.05 Trigger 0.01 0.01 0.01 Live time <0.01 <0.01 <0.01 Total uncertainty 1.7 0.38 0.18 Table 4presents rough estimates for acceptable amount of possible contaminants based on their activity.Contaminant Source Isotope Acceptable Amount Al 2 O 3 Powder Spot-Scrubbing 40 K 36 kg Fingerprints Human Contact 40 K > 1000 handprints Human Sweat Human Contact 40 K < 1 cm 3 Bamboo or Wood Used for Scaffolding 40 K < 3 cm 3 Dirt Surrounding Environment 40 K, 232 Th, 238 U 2 g Paint Chips Semi-Cleanroom Crane 40 K, 232 Th, 238 U < 50 g Table 4 . 4Possible radioactive contaminants in the OAVs and IAVs, listed with their maximum acceptable post-cleaning presence. VesselMagnitude (mm) Angle (deg) Height (mm) Tilt (mm)SSV1 Top - - - 4 SSV2 Top - - - 2 OAV1 Flange 2 257 542 <4.4 OAV2 Flange 7 39 546 <4 IAV1 Lid Edge 3 256 970 5 IAV2 Lid Edge 2 259 964 6 OAV1 Bottom 2 0 4515 4 OAV2 Bottom 4 231 4514 3 SSV1 Bottom 3 352 4556 3 SSV2 Bottom 5 186 4557 2 and 23.40±0.027 m 3 as the nominal inner target vessel volumes for the Gd-LS in AD1 and AD2, respectively. This value, along with the previously calculated acrylic volume and OAV dimensions, can be used to deduce the LS volumes. The OAV bottom rib volume was estimated based on the as-designed values, with an uncertainty of 5%. The calculated LS volumes are 25.18 ±0.053 m 3 and 25.05±0.076 m 3 for the LS in AD1 and AD2, respectively. The estimated uncertainties of the Gd-LS and LS volume from the geometric characterization before filling are 0.15% and 0.2-0.3%, respectively. Unc. Value Unc. Value Unc. Value Unc.Table 14. Aggregated average values of major detector parameters for AD1 and AD2 AVs. Uncertainties are quoted along with each parameter. The volume inside parameter describes the Gd-LS volume for each IAV and the LS volume for each OAV.Parameter IAV1 IAV2 OAV1 OAV2 Value Thickness (mm) 10.55 0.98 10.69 0.8 16.8 1.3 16.4 1.2 Diameter (mm) 3123.1 1.2 3123.1 2.0 4002.4 0.9 3997.4 0.5 Height (mm) 3106 1.31 3101 1.55 3981 1.8 3980.2 4.7 Mass (kg) 907 7 916 7 1847 7 1852 7 Volume (m 3 ) 0.762 0.009 0.770 0.009 1.552 0.014 1.558 0.014 Volume Inside (m 3 ) 23.44 0.027 23.40 0.033 25.18 0.053 25.05 0.76 AcknowledgmentsThis work was done under DOE contract in the US and NSC and MOE contract in Taiwan, with support of the University of Wisconsin Foundation and National Taiwan University. We are grateful to Reynolds Polymer Technology, Inc., of Grand Junction Colorado for their great support in engineering, fabrication, and R&D of the outer acrylic target vessels for the Daya Bay reactor antineutrino experiment. We are also grateful to Nakano International, Ltd, of Taiwan for their support in the construction of the inner acrylic target vessels. . Q R Ahmad, SNO CollaborationarXiv:nucl-ex/9910016Nucl. Inst. Meth. A449. 172Ahmad, Q. R. et al. (SNO Collaboration), Nucl. Inst. Meth. A449 172 (2000). arXiv:nucl-ex/9910016 607 Hollingsworth St. 
Reynolds Polymer Technology, Inc., 607 Hollingsworth St., Grand Junction, CO 81505 USA. http://www.reynoldspolymer.com/
Coroplast, Inc., 5001 Spring Valley Road Suite 400 East, Dallas, Texas 75244 USA. http://www.coroplast.com/
MSR Electronics GmbH, Oberwilerstrasse 16, CH-8444 Henggart, Switzerland. http://www.msr.ch/en/
Telespial Systems, Inc., 827 Hollywood Way, Burbank, CA 91505 USA. http://www.trackstick.com/
Instrumented Sensor Technology, Inc., 4704 Moore Street, Okemos, MI 48864 USA. http://www.isthq.com/
Leica Geosystems AG, Heinrich Wild Strasse, CH-9435 Heerbrugg, St. Gallen, Switzerland. http://www.leica-geosystems.com/en/
B. R. Littlejohn et al., JINST 4, T09001 (2009). arXiv:0907.3706 [physics.ins-det]
Hamamatsu Photonics, Electron Tube Division, 314-5 Shimokanzo, Iwata City, Shizuoka Pref., Japan. http://www.hamamatsu.com/
SI Photonics, Inc., 1870 W. Prince Rd., Ste. 38, Tucson, AZ 85705 USA. http://www.si-photonics.com/
Yuen-Dat Chan, private communication.
J. D. Stachiw, Handbook of Acrylics for Submersibles, Hyperbaric Chambers, and Aquaria. Best Publishing Co., Flagstaff, AZ, 2003.
J. C. Zwinkels, W. F. Davidson, and C. X. Dodd, Appl. Opt. 29, 3240 (1990).
Schott North America, Inc., 555 Taxter Rd., Elmsford, NY 10523 USA. http://www.us.schott.com/
Strainoptics, Inc., 108 W. Montgomery Ave., North Wales, PA 19454 USA. http://www.strainoptic.com/
[]
[ "A Borel-Weil Theorem for the Quantum Grassmannians", "A Borel-Weil Theorem for the Quantum Grassmannians" ]
[ "Colin Mrozinski", "Réamonn Ó Buachalla" ]
[]
[]
We establish a noncommutative generalisation of the Borel-Weil theorem for the Heckenberger-Kolb calculi of the quantum Grassmannians. The result is presented in the framework of noncommutative complex and holomorphic structures and generalises previous work of a number of authors on quantum projective space. As a direct consequence we get a novel holomorphic presentation of the twisted Grassmannian coordinate ring studied in noncommutative algebraic geometry.
null
[ "https://arxiv.org/pdf/1611.07969v3.pdf" ]
119,178,431
1611.07969
0397095bb6d2b1ca3c28b963f366b3360d8cf96f
A Borel-Weil Theorem for the Quantum Grassmannians 11 May 2017 Colin Mrozinski Réamonnó Buachalla A Borel-Weil Theorem for the Quantum Grassmannians 11 May 2017 We establish a noncommutative generalisation of the Borel-Weil theorem for the Heckenberger-Kolb calculi of the quantum Grassmannians. The result is presented in the framework of noncommutative complex and holomorphic structures and generalises previous work of a number of authors on quantum projective space. As a direct consequence we get a novel holomorphic presentation of the twisted Grassmannian coordinate ring studied in noncommutative algebraic geometry. Introduction The Borel-Weil theorem [43] is an elegant geometric procedure for constructing all unitary irreducible representations of a compact Lie group. The construction realises each representation as the space of holomorphic sections of a line bundle over a flag manifold. It is a highly influential result in the representation theory of Lie groups and since the discovery of quantum groups has inspired a number of noncommutative generalisations. Important examples include the homological algebra approaches of Polo, Andersen, and Wen [1], and of Parshall and Wang [42], the representation theoretic approach of Mimachi, Noumi, and Yamada [33,34], the coherent state approach of Biedenharn and Lohe [5], and the equivariant vector bundle approach of Gover and Zhang [16]. Moreover, the question of whether these examples can be understood in terms of a general Borel-Weil theorem for compact quantum groups is an important open problem [48,Problem 1.5]. In the classical setting the holomorphic sections of a line bundle over a flag manifold are the same as its parabolic invariant sections. In the above examples the parabolic invariant description is generalised without introducing any noncommutative notion of holomorphicity. In recent years, however, the study of differential calculi over the quantum flag manifolds C q [G 0 /L 0 ] has yielded a much better understanding of their noncommutative complex geometry. Subsequent work on the Borel-Weil theorem for quantum groups has used the notion of a complex structure on a differential * -calculus to generalise the Koszul-Malgrange presentation of holomorphic vector bundles [28]. This direction of research was initiated by Majid in his influential paper on the Podleś sphere [30]. It was continued by Khalkhali, Landi, van Suijlekom, and Moatadelro in [24,25,26] where the definitions of complex structure and noncommutative holomorphic vector bundle were introduced and the family of examples extended to include quantum projective space C q [CP n ]. The differential calculi used in the above work are those identified by Heckenberger and Kolb in their remarkable classification of calculi over the irreducible quantum flag manifolds [21]. In this paper we show that for the A-series irreducible quantum flag manifolds, which is to say the quantum Grassmannians, the Heckenberger-Kolb calculus Ω 1 q (Gr n,r ) has an associated q-deformed Borel-Weil theorem generalising the case of quantum projective space. This is done using the framework of quantum principal bundles together with a realisation of the Heckenberger-Kolb calculus as the restriction of a quotient of the standard bicovariant calculus on C q [SU n ]. This means that the paper is closer in form to [30,24] rather than [25,26], where Dabrowski and D'Andrea's spectral triple presentation of the quantum projective space calculus was used. 
In addition to the theory of quantum principal bundles, the paper uses two novel mathematical objects. The first is a principal C[U 1 ]-bundle over the quantum Grassmannians which generalises the well-known presentation of the odd-dimensional quantum spheres as the total space of a C[U 1 ]-bundle over quantum projective space. Just as for quantum projective space, this presents a workable description of the line bundles over C q [Gr n,r ]. The second is a sequence of twisted derivation algebras which serve as a crucial tool for demonstrating non-holomorphicity of line bundle sections in the proof of the Borel-Weil theorem. One of our main motivations for extending the Borel-Weil theorem to the quantum Grassmannians is to further explore the connections between quantum flag manifolds C q [G 0 /L 0 ] and their twisted homogeneous coordinate ring counterparts H q (G 0 /L 0 ) in noncommutative algebraic geometry [44,3,10]. These rings are deformations of the homogeneous coordinate rings of the classical flag manifolds and are important examples in the theory of quantum cluster algebras [18,17]. As was shown in [13], every quantum flag manifold can be constructed as a coinvariant subalgebra of H q (G 0 /L 0 ) ⊗ H q (G 0 /L 0 ) * , where H q (G 0 /L 0 ) * denotes the dual comodule of H q (G 0 /L 0 ). An important consequence of the holomorphic approach to the Borel-Weil theorem for C q [Gr n,r ] is that one can mimic the classical ample bundle presentation of H(Gr n,r ) to go in the other direction, that is, to construct the twisted homogeneous coordinate ring H q (Gr n,r ) from the quantum coordinate algebra C q [Gr n,r ]. This directly generalises the work of [24,25,26] for quantum projective space, which is an essential ingredient in the construction of the category of coherent sheaves over C q [CP n ] [40]. The paper naturally leads to a number of future projects. First is the Borel-Weil theorem for the C-series irreducible quantum flag manifolds and the full quantum flag manifold of C q [SU 3 ] [39]. Second is the formulation of a quantum Borel-Weil-Bott theorem for C q [Gr n,r ]. Third, the standard circle bundle introduced in §3 allows for a natural definition of weighted quantum Grassmannians generalising the definition of weighted quantum projective space [7]. This and the connections with Cuntz-Pimsner algebras [2] will be discussed in [41]. Finally, this paper is a first step towards the longer term goals of understanding the noncommutative Kähler geometry of C q [Gr n,r ] [37], constructing spectral triples for C q [Gr n,r ] [12], and defining the category of coherent sheaves [40] of C q [Gr n,r ]. The paper is organised as follows: In Section 2, we recall some well-known material about cosemisimple and coquasi-triangular Hopf algebras, Hopf-Galois extensions, principal comodule algebras, and quantum homogeneous spaces. We also recall the basics of the theory of differential calculi and quantum principal bundles, as well as the more recent notions of noncommutative complex and holomorphic structures. In Section 3, we recall the definition of the quantum Grassmannians C q [Gr n,r ] and introduce C q [S n,r ] the total space of the C[U 1 ]-circle bundle discussed above. We show that C q [S n,r ] is equal to the direct sum of the line bundles over C q [Gr n,r ], deduce a set of generators for C q [S n,r ], and give the defining coaction of the C[U 1 ]-bundle. 
Moreover, we construct an explicit strong principal connection for the bundle and discuss the special case of quantum projective space. In Sections 4 and 5, we use the quantum Killing form, and its associated bicovariant calculus, to construct a calculus Ω 1 q (SU n , r) which restricts to the Heckenberger-Kolb calculus on C q [Gr n,r ]. Moreover, Ω 1 q (SU n , r) is shown to induce a quantum principal bundle structure on the Hopf-Galois extensions C q [Gr n,r ] ֒→ C q [S n,r ] and C q [Gr n,r ] ֒→ C q [SU n ]. The universal principal connection introduced in Section 3 is then shown to restrict to a principal connection for the bundle, and the induced covariant holomorphic structures on the line bundles are shown to be the unique such structures. In Section 6, we prove the main result of the paper. H 0 (E k ) = V (r, k), H 0 (E −k ) = 0, k ∈ N 0 , where V (r, k) is the q-analogue of the irreducible corepresentation occurring in the classical Borel-Weil theorem. Finally, we use the theorem to give a novel presentation of H q (Gr n,r ) generalising the classical ample bundle presentation of the homogeneous coordinate ring. Acknowledgements: We Preliminaries The preliminaries are divided into three subsections. The first deals with Hopf algebra theory, the second deals with differential calculi and complex structures, and the third recalls the corepresentation theory of the quantum group C q [SU n ]. This reflects the subject of the paper which shows how, for the quantum Grassmannians, the interaction of Hopf-Galois theory and differential calculi gives a geometric realisation for a special class of corepresentations of C q [SU n ]. Preliminaries on Hopf Algebras and Hopf-Galois Extensions In this subsection we recall some well-known material about cosemisimple and coquasitriangular Hopf algebras, Hopf-Galois extensions, and principal comodule algebras, as well as Takeuchi's categorical equivalence for quantum homogeneous spaces. Cosemisimple and Coquasi-triangular Hopf Algebras Throughout the paper all algebras are assumed to be unital and defined over the complex numbers. The letters G or H will always denote Hopf algebras, all antipodes are assumed to be bijective, and we use Sweedler notation. Moreover, we denote g + := g − ε(g)1, for g ∈ G, and V + := V ∩ ker(ε), for V a subspace of G. For any left G-comodule (V, ∆ L ), its space of matrix elements is the coalgebra C(V ) := span C {(id ⊗ f )∆ L (v) | f ∈ Hom C (V, C), v ∈ V } ⊆ G. (The matrix coefficients of a right G-comodule are defined similarly.) A comodule is irreducible if and only if its coalgebra of matrix elements is irreducible, and C(V ) = C(W ) if and only if V is isomorphic to W . Furthermore, an isomorphism is given by (ev ⊗ id) • (id ⊗ ∆ R ) : V * ⊗ V → C(V ), for all V ∈ G,(1) where V * denotes the dual left G-comodule, ev : V * ⊗ V → C is the evaluation map, and G denotes the isomorphism classes of irreducible left G-comodules [27,Theorem 11.8]. Definition 2.1. A Hopf algebra G is called cosemisimple if G ≃ V ∈ G C(V ) . We call this decomposition the Peter-Weyl decomposition of G. We finish with another Hopf algebra structure of central importance in the paper. Definition 2.2. We say that a Hopf algebra G is coquasi-triangular if it is equipped with a convolution-invertible linear map r : G ⊗ G → C obeying, for all f, g, h ∈ G, the relations r(f g ⊗ h) = r(f ⊗ h (1) )r(g ⊗ h (2) ), r(f ⊗ gh) = r(f (1) ⊗ h)r(f (2) ⊗ g), g (1) f (1) r(f (2) ⊗ g (2) ) = r(f (1) ⊗ g (1) )f (2) g (2) , r(f ⊗ 1) = r(1 ⊗ f ) = ε(f ). 
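As a minimal example, added here and not taken from the paper: the coordinate Hopf algebra C[U_1] = C[t, t^{-1}], which reappears in §3 as the structure Hopf algebra of the standard circle bundle, is coquasi-triangular. For any fixed q ∈ R_{>0} the pairing
$$ r(t^m \otimes t^n) := q^{mn}, \qquad m, n \in \mathbb{Z}, $$
is convolution-invertible with inverse $\bar r(t^m \otimes t^n) = q^{-mn}$; the first two relations of Definition 2.2 reduce to $q^{(m+m')n} = q^{mn}q^{m'n}$ and $q^{m(n+n')} = q^{mn'}q^{mn}$, and the remaining two hold trivially because C[U_1] is commutative and cocommutative.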
Principal Comodule Algebras For H a Hopf algebra, and V a right H-comodule with coaction ∆ R , we say that an element v ∈ V is coinvariant if ∆ R (v) = v ⊗ 1. We denote the subspace of all coinvariant elements by V coH and call it the coinvariant subspace of the coaction. We define a coinvariant subspace of a left-coaction analogously. For a right H-comodule algebra P with multiplication m, its coinvariant subspace M := P coH is clearly a subalgebra of P . Throughout the paper we will always use M in this sense. We now recall a generalisation of the classical notion of an associated vector bundle. For V and W respectively a right, and left, H-comodule, the cotensor product of V and W is the vector space V H W := ker ∆ R ⊗ id − id ⊗ ∆ L : V ⊗ W → V ⊗ H ⊗ W . For P a right H-comodule algebra, P H V is clearly a left M -module. We call any such M -module an associated vector bundle, or simply an associated bundle. In the special case when V is 1-dimensional, we call the module an associated line bundle. If the mapping can := (m P ⊗ id) • (id ⊗ ∆ R ) : P ⊗ M P → P ⊗ H, is an isomorphism, then we say that P is a H-Hopf-Galois extension of M . We sometimes find it convenient to denote a Hopf-Galois extension by M ֒→ P without explicit reference to H. If the functor P ⊗ M − : M Mod → C Mod, from the category of left M -modules to the category of complex vector spaces, preserves and reflects exact sequences, then we say that P is faithfully flat as a right module over M . The definition of faithful flatness of P as a left M -module is analogous. Definition 2.3. A principal right H-comodule algebra is a right H-comodule algebra (P, ∆ R ) such that P is a Hopf-Galois extension of M := P coH and P is faithfully flat as a right and left M -module. We call P the total space and M the base. Directly verifying the conditions of this definition can in general be quite difficult. The following theorem gives a more practical reformulation. Faithful flatness of G implies ker(π) = M + G [32, Theorem 1.1] (see also [38,Lemma 4.2]), and so, π restricts to an isomorphism Φ(G) ≃ H. This in turn implies that an explicit inverse to the canonical map is given by (2) . can −1 : G ⊗ Φ(G) → G ⊗ M G, g ′ ⊗ [g] → g ′ S(g (1) ) ⊗ M g Hence, if G is faithfully flat as a module over a quantum homogeneous space M , then G is a Hopf-Galois extension of M , and so, it is a principal comodule algebra. Preliminaries on Complex and Holomorphic Structures In this subsection we give a concise presentation of the basic theory of differential calculi and quantum principal bundles. We also recall the more recent notions of complex and holomorphic structures as introduced in [24] and [4]. The definition of a holomorphic structure is motivated by the classical Koszul-Malgrange theorem [28] which establishes an equivalence between holomorphic structures on a vector bundle and flat ∂-operators on the smooth sections of the vector bundle. A more detailed discussion of differential calculi and complex structures can be found in [36]. First-Order Differential Calculi A first-order differential calculus over a unital algebra A is a pair (Ω 1 , d), where Ω 1 is an A-A-bimodule and d : A → Ω 1 is a linear map for which the Leibniz rule holds We say that a first-order calculus is connected if ker(d) = C1. 
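Explicitly, the Leibniz rule required of d is
$$ d(ab) = a\,(db) + (da)\,b, \qquad a, b \in A. $$
A small added remark (not in the original): applying it to 1 = 1·1 gives d(1) = 2 d(1), so d(1) = 0 and C1 ⊆ ker(d) always holds; connectedness is precisely the statement that this inclusion is an equality.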
The universal first-order differential calculus over A is the pair (Ω 1 u (A), d u ), where Ω 1 u (A) is the kernel of the product map m : A ⊗ A → A endowed with the obvious bimodule structure, and d u is defined by d u : A → Ω 1 u (A), a → 1 ⊗ a − a ⊗ 1. By [49, Proposition 1.1] there exists a surjective morphism from Ω 1 u (A) onto any other calculus over A. We say that a first-order differential calculus Ω 1 (M ) over a quantum homogeneous space M is left-covariant if there exists a (necessarily unique) left G-coaction is considered as an object in H Mod M in the obvious way) such that an isomorphism is given by ∆ L : Ω 1 (M ) → G ⊗ Ω 1 (M ) such that ∆ L (mdn) = ∆(m)(id ⊗ d)∆(n),Φ(Ω 1 (M )) → M + /I =: V M , [dm] → [m + ]. Moreover, this association defines a bijection between covariant first-order calculi and sub-objects of M + . We tacitly identify Φ(Ω 1 (M )) and M + /I throughout the paper, and use the well-known formula dg = g (1) ⊗ g + (2) for the implied map d : G → G H V M . Finally, let us consider the case of the trivial quantum homogeneous space, that is, the quantum homogeneous space corresponding to the Hopf algebra map ε : G → C. Here we also have an obvious notion of right covariance for a calculus with respect to a (necessarily unique) right G-coaction ∆ R . If a calculus is both left and right-covariant and satisfies (id ⊗ ∆ R ) • ∆ L = (∆ L ⊗ id) • ∆ R , then we say that it is bicovariant. It was shown in [49, Theorem 1.8] that a left-covariant calculus is bicovariant if and only if the corresponding ideal I is a subcomodule of G with respect to the (right) adjoint coaction Ad : G → G ⊗ G, defined by Ad(g) := g (2) ⊗ S(g (1) )g (3) . Finally, when considering covariant calculi over Hopf algebras we use the notation Λ 1 := G + /I.(3) Differential Calculi A graded algebra A = k∈N 0 A k , together with a degree 1 map d, is called a differential graded algebra if d is a graded derivation, which is to say, if it satisfies the graded Leibniz rule d(αβ) = (dα)β + (−1) k αdβ, for all α ∈ A k , β ∈ A. The operator d is called the differential of the differential graded algebra. Definition 2.5. A differential calculus over an algebra A is a differential graded algebra (Ω • , d) such that Ω 0 = A and Ω k = span C {a 0 da 1 ∧ · · · ∧ da k | a 0 , . . . , a k ∈ A}, where ∧ denotes multiplication in Ω • . We say that a differential calculus (Γ • , d Γ ) extends a first-order calculus (Ω 1 , d Ω ) if there exists an isomorphism ϕ : (Ω 1 , d Ω ) → (Γ 1 , d Γ ). It can be shown that any first-order calculus admits an extension Ω • which is maximal in the sense that there there exists a unique morphism from Ω • onto any other extension of Ω 1 , where the definition of differential calculus morphism is the obvious extension of the first-order definition [36, §2.5]. We call this extension the maximal prolongation of the first-order calculus. Complex Structures We call a differential calculus (Ω • , d) over a * -algebra A a differential * -calculus if the involution of A extends to an involutive conjugate-linear map on Ω • , for which (dω) * = dω * , for all ω ∈ Ω • , and (ω k ∧ ω l ) * = (−1) kl ω * l ∧ ω * k , for all ω k ∈ Ω k , ω l ∈ Ω l . Definition 2.6. An almost complex structure for a differential * -calculus Ω • is an N 2 0 -algebra grading (a,b)∈N 2 0 Ω (a,b) for Ω • such that, for all (a, b) ∈ N 2 0 , 1. Ω k = a+b=k Ω (a,b) , 2. (Ω (a,b) ) * = Ω (b,a) . 
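For orientation (a classical aside, not part of the original text): when A = C^∞(M) for a complex manifold M and Ω• is the de Rham calculus, Ω^(a,b) is the usual space of (a,b)-forms, locally spanned by
$$ dz_{i_1}\wedge\cdots\wedge dz_{i_a}\wedge d\bar z_{j_1}\wedge\cdots\wedge d\bar z_{j_b}. $$
Condition 1 is then the familiar type decomposition of complex differential forms, and condition 2 records that complex conjugation interchanges the types (a, b) and (b, a).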
Let ∂ and ∂ be the unique homogeneous operators of order (1, 0), and (0, 1) respectively, defined by ∂| Ω (a,b) := proj Ω (a+1,b) • d, ∂| Ω (a,b) := proj Ω (a,b+1) • d, where proj Ω (a+1,b) , and proj Ω (a,b+1) , are the projections from Ω a+b+1 onto Ω (a+1,b) , and Ω (a,b+1) respectively. Definition 2.7. A complex structure is an almost complex structure for which d = ∂+∂, or equivalently, for which ∂ 2 = ∂ 2 = 0. The opposite almost-complex structure of an almost-complex structure Ω (•,•) is the N 2 0 - algebra grading Ω (•,•) , defined by setting Ω (a,b) := Ω (b,a) . Note that the * -map of the calculus sends Ω (a,b) to Ω (a,b) and vice-versa. Moreover, it is clear that an almost-complex structure is integrable if and only if its opposite almost-complex structure is integrable. If G and H are Hopf * -algebras and π : G → H is a * -algebra map, then M = G coH is clearly a * -algebra. We say that a complex structure on a differential * -calculus over M is covariant if the decomposition in Definition 2.6.1 is a decomposition in the category G M Mod M . Connections and Holomorphic Structures For F a left module over an algebra A and Ω • (A) a differential calculus over A, a connection for F is a C-linear mapping ∇ : F → Ω 1 (A) ⊗ A F such that ∇(af ) = a∇(f ) + da ⊗ A f, f ∈ F, a ∈ A. For any connection ∇ : F → Ω 1 ⊗ A F, setting ∇(ω ⊗ A f ) := dω ⊗ A f + (−1) k ω ∧ ∇(f ), f ∈ F, ω ∈ Ω k , defines an extension of ∇ to a C-linear map ∇ : Ω • ⊗ A F → Ω • ⊗ A F. We say that ∇ is flat if the curvature operator ∇ 2 : F → Ω 2 ⊗ A F is the zero map. It is easily checked that flatness of ∇ implies that ∇ : Ω • ⊗ A F → Ω • ⊗ A F is a complex. Note that the curvature operator is always a left A-module map, as is the difference of any two connections. Definition 2.8. For Ω • (A) a * -calculus endowed with a choice of complex structure, a ∂-operator for an A-module F is a connection for the calculus Ω (0,•) . A holomorphic structure for F is a flat ∂-operator. Since a complex structure is a bimodule decomposition, composing a connection for F with the projection Π (0,1) : Ω 1 ⊗ A F → Ω (0,1) ⊗ A F gives a ∂-operator for F. For a quantum homogeneous space M := G co(H) , a connection for F ∈ G M Mod M is said to be covariant if it is a left G-comodule map. For a covariant complex structure, a covariant connection will clearly project to covariant holomorphic structure. Quantum Principal Bundles and Principal Connections For a right H-comodule algebra (P, ∆ R ) with M := P co(H) , it can be shown [8, Proposition 3.6] that M ֒→ P is a Hopf-Galois extension if and only if an exact sequence is given by 0 −→ P Ω 1 u (M )P ι −→Ω 1 u (P ) ver −→P ⊗ H + −→ 0,(4) where Ω 1 u (M ) is the restriction of Ω 1 u (P ) to M , ι is the inclusion map, and ver is the restriction of can to Ω 1 u (P ). The following definition presents sufficient criteria for the existence of a non-universal version of this sequence. Definition 2.9. A quantum principal H-bundle is a Hopf-Galois extension M = P co(H) together with a sub-bimodule N ⊆ Ω 1 u (P ) which is coinvariant under the right H-coaction ∆ R and for which ver(N ) = G ⊗ I, for some Ad-subcomodule right ideal I ⊆ H + . Denote by Ω 1 (P ) the first-order calculus corresponding to N , denote by Ω 1 (M ) the restriction of Ω 1 (P ) to M , and finally as a special case of 3, we denote Λ 1 H := H + /I. 
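A brief remark added here (not in the original): because N is right H-coinvariant and ver maps it into the subspace generated by the right ideal I (Definition 2.9), the universal map ver of (4) descends to the quotient calculus,
$$ \mathrm{ver}\colon\ \Omega^1(P) = \Omega^1_u(P)/N \ \longrightarrow\ P \otimes H^+/I \ = \ P \otimes \Lambda^1_H, $$
and it is this induced map that appears in the exact sequence below.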
The quantum principal bundle definition implies [19] that an exact sequence is given by 0 −→ P Ω 1 (M )P ι −→Ω 1 (P ) ver −→P ⊗ Λ 1 H −→ 0.(5) A principal connection for a quantum principal H-bundle M ֒→ P is a right H-comodule, left P -module, projection Π : Ω 1 (P ) → Ω 1 (P ) such that ker(Π) = P Ω 1 (M )P . A princi- pal connection Π is called strong if (id − Π) dP ⊆ Ω 1 (M )P . Takeuchi's equivalence implies that for any homogeneous space endowed with a quantum principal bundle structure, left G-covariant principal connections are in bijective correspondence with right H-comodule complements to V M G in Λ 1 . Explicitly, for such a complement K the corresponding principal connection is given by Π : G ⊗ Λ 1 → G ⊗ Λ 1 , g ⊗ v → g ⊗ proj K (v), where proj K : Λ 1 → K denotes the obvious connection. For any associated vector bundle F = P H V , let us identify Ω 1 (M ) ⊗ M F with its canonical image in Ω 1 (M )P ⊗ V . A strong principal connection Π induces a connection ∇ on F by ∇ : F → Ω 1 (M ) ⊗ M F, i g i ⊗ v i → (id − Π) ⊗ id i dg i ⊗ v i .(6) The following theorem shows that strong connections and principal comodule algebras have an intimate relationship. 1. For ℓ : H → P ⊗ P a principal ℓ-map, a strong principal connection is defined by Π ℓ := (m P ⊗ id) • (id ⊗ ℓ) • ver : Ω 1 u (P ) → Ω 1 u (P ).(7) 2. This defines a bijective correspondence between principal ℓ-maps and strong principal connections. The Quantum Group C q [SU n ] and its Corepresentations In this subsection we recall the definitions of the quantum groups C q [U n ] and C q [SU n ], as well as the coquasi-triangular structure of the latter. The definition of the quantised enveloping algebra U q (sl n ) is then presented, along with its dual pairing with C q [SU n ]. Finally, Noumi, Yamada, and Mimachi's quantum minor presentation of the corepresentation theory of C q [SU n ] is recalled. Where proofs or basic details are omitted we refer the reader to [27, §9.2] and [33, 34]. The Quantum Groups C q [U n ] and C q [SU n ] For q ∈ R >0 , let C q [GL n ] be unital complex algebra generated by the elements u i j , det −1 n , for i, j = 1, . . . , n satisfying the relations u i k u j k = qu j k u i k , u k i u k j = qu k j u k i , 1 ≤ i < j ≤ n; 1 ≤ k ≤ n, u i l u j k = u j k u i l , u i k u j l = u j l u i k + (q − q −1 )u i l u j k , 1 ≤ i < j ≤ n; 1 ≤ k < l ≤ n, det n det −1 n = 1, det −1 n det n = 1, where det n , the quantum determinant, is the element det n := σ∈Sn (−q) ℓ(σ) u 1 σ(1) u 2 σ(2) · · · u n σ(n) , with summation taken over all permutations σ of the set {1, . . . , n}, and ℓ(σ) is the number of inversions in σ. As is well known [27, §9.2.2], det n is a central element of C q [GL n ]. A bialgebra structure on C q [GL n ], with coproduct ∆ and counit ε, is uniquely determined by ∆(u i j ) := n k=1 u i k ⊗ u k j ; ∆(det −1 n ) := det −1 n ⊗ det −1 n ; ε(u i j ) := δ ij ; and ε(det −1 n ) = 1. Moreover, we can endow C q [GL n ] with a Hopf algebra structure by defining S(det −1 n ) := det n , S(u i j ) := (−q) i−j σ∈S n−1 (−q) ℓ(σ) u k 1 σ(l 1 ) u k 2 σ(l 2 ) · · · u k n−1 σ(l n−1 ) det −1 n , where {k 1 , . . . , k n−1 } := {1, . . . , n}\{j}, and {l 1 , . . . , l n−1 } := {1, . . . , n}\{i} as ordered sets. A Hopf * -algebra structure is determined by (det −1 n ) * = det n , and (u i j ) * = S(u j i ). We denote the Hopf * -algebra by C q [U n ], and call it the quantum unitary group of order n. 
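As a quick low-rank check, added here and not part of the original text: for n = 2 the quantum determinant reads
$$ \det{}_2 = u^1_1 u^2_2 - q\, u^1_2 u^2_1, $$
and the antipode formula above gives
$$ S(u^1_1) = u^2_2 \det{}_2^{-1}, \qquad S(u^1_2) = -q^{-1} u^1_2 \det{}_2^{-1}, \qquad S(u^2_1) = -q\, u^2_1 \det{}_2^{-1}, \qquad S(u^2_2) = u^1_1 \det{}_2^{-1}, $$
so that, for example, the ∗-structure (u^i_j)^* = S(u^j_i) yields (u^1_1)^* = u^2_2 \det{}_2^{-1} and (u^1_2)^* = -q\, u^2_1 \det{}_2^{-1}.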
We denote the Hopf * -algebra C q [U n ]/ det n −1 by C q [SU n ], and call it the quantum special unitary group of order n. A coquasi-triangular structure for C q [SU n ] is defined by r(u i j ⊗ u k l ) := q − 1 n q δ ik δ ij δ kl + (q − q −1 )θ(i − k)δ il δ kj , i, j, k, l = 1, . . . , n, where θ is the Heaviside step function [27,Theorem 10.9]. From r we can produce a family of linear maps Q ij : C q [SU n ] → C, g → n a=1 r(u i a ⊗ g (1) )r(g (2) ⊗ u a j ), for i, j = 1, . . . , n. The quantum Killing form of r is the linear map Q : C q [SU n ] → M n (C), h → [Q ij (h)] (ij) . It is easily seen that ker (Q) + is a right ideal of C q [SU n ] + . Moreover, it is an Ad- subcomodule of C q [SU n ] + [27, §10.1.3]. Hence there exists an associated bicovariant calculus Ω 1 q,bc (SU n ) which we call the standard bicovariant calculus. The Quantised Enveloping Algebra U q (sl n ) Recall that the Cartan matrix of sl n is the array a ij = 2δ ij − δ i+1,j − δ i,j+1 , where i, j = 1, . . . , n − 1. The quantised enveloping algebra U q (sl n ) is the noncommutative algebra generated by the elements E i , F i , K i , K −1 i , for i = 1, . . . , n − 1, subject to the relations K i K j = K j K i , K i K −1 i = K −1 i K i = 1, K i E j K −1 i = q a ij E j , K i F j K −1 i = q −a ij F j , E i F j − F j E i = δ ij K i − K −1 i (q − q −1 ) , along with the quantum Serre relations which we omit (see [27, §6.1.2] for details). A Hopf algebra structure is defined on U q (sl n ) by setting ∆(K ±1 i ) = K ±1 i ⊗ K ±1 i , ∆(E i ) = E i ⊗ K i + 1 ⊗ E i , ∆(F i ) = F i ⊗ 1 + K −1 i ⊗ F i S(E i ) = −E i K −1 i , S(F i ) = −K i F i , S(K i ) = K −1 i , ε(E i ) = ε(F i ) = 0, ε(K i ) = 1. A non-degenerate dual pairing between C q [SU n ] and U q (sl n ) is uniquely defined by K i , u i i = q −1 , K i−1 , u i i = q, K j , u i i = 1, for j = i, E i , u i+1 i = 1, F i , u i i+1 = 1, with all other pairings of generators being zero. Quantum Minor Determinants and Irreducible Corepresentations For I := {i 1 , . . . , i r } and J := {j 1 , . . . , j r } two subsets of {1, . . . , n}, the associated quantum minor z I J is the element z I J := σ∈Sr (−q) ℓ(σ) u σ(i 1 ) j 1 · · · u σ(ir ) jr = σ∈Sr (−q) ℓ(σ) u i 1 σ(j 1 ) · · · u ir σ(jr) . Note that when I = J = {1, . . . , n} we get back the determinant. For I, J ⊆ {1, . . . , n} with |I| = r and |J| = n − r, and denoting R := {1, . . . , r}, we adopt the conventions z I := z I R , z J := z J R c , z := z R R , z := z R c R c , where R c denotes the complement to R in {1, . . . , n}. The * -map of C q [SU n ] acts on quantum minors as (z I J ) * = S(z J I ) = (−q) ℓ(J,J c )−ℓ(I,I c ) z I c J c ,(8)∆(z I J ) = K z I K ⊗ z K J ,(9) where summation is over all ordered subsets K ⊆ {1, . . . , n} with |I| = |J| = |K| [33, §1.2]. A Young diagram is a finite collection of boxes arranged in left-justified rows, with the row lengths in non-increasing order. Young diagrams with p rows are clearly equivalent to dominant weights of order p, which is to say elements λ = (λ 1 , . . . , λ p ) ∈ N p , such that λ 1 ≥ · · · ≥ λ p . We denote the set of dominant weights of order p by Dom(p). A semi-standard tableau of shape λ with labels in {1, . . . , n} is a collection T = {T r,s } r,s of elements of {1, . . . , n} indexed by the boxes of the corresponding Young diagram, and satisfying, whenever defined, the inequalities T r−1,s < T r,s , T r,s−1 ≤ T r,s . We denote by SSTab n (λ) the set of all semi-standard tableaux for a given dominant weight λ. 
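For instance (an illustration added here, not in the original): for n = 3 and I = J = {1, 2} the quantum minor is
$$ z^{\{1,2\}}_{\{1,2\}} = u^1_1 u^2_2 - q\, u^2_1 u^1_2, $$
while for the single-column Young diagram λ = (1, 1) the set SSTab_3(λ) has exactly three elements, with column entries (1, 2), (1, 3) and (2, 3); these label the minors $z^{\{1,2\}}_{\{1,2\}}$, $z^{\{1,3\}}_{\{1,2\}}$ and $z^{\{2,3\}}_{\{1,2\}}$, a q-analogue of the standard basis of Λ²C³, and they reappear in the standard monomial construction recalled next.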
For a Young diagram with n − 1 rows, the standard monomial associated to a semistandard tableau T is z T := z T 1 · · · z T λ 1 ∈ C q [SU n ], where T s := {T 1,s , . . . , T ms,s } as an ordered set, for 1 ≤ s ≤ λ 1 , and m s is the length of the s th column. It can be shown that the elements z T are linearly independent, that the space V L (λ) := span C {z T | T ∈ SSTab n (λ)} is an irreducible left C q [SU n ]-comodule, and that every irreducible left comodule of C q [SU n ] is isomorphic to V (λ) for some λ (see [33, §2] and [34] for details). Similarly, [34]. for z T := z T 1 · · · z T λ 1 ∈ C q [SU n ], the space V R (λ) := span C {z T | T ∈ SSTab n (λ)} is an irreducible right C q [SU n ]-comodule and all irreducible right C q [SU n ]-comodules are of this form. Moreover, C q [SU n ] is cosemisimple Using the dual pairing between U q (sl n ) and C q [SU n ], every left comodule V (λ) can be given a left U q (sl n )-module action ⊲ in the usual way. The basis vectors z T are all weight vectors, which is to say, for any dominant weight λ and any T ∈ SSTab n (λ), we have K i ⊲ z T = q λ i z T , for some λ i ∈ Z. We call (λ 1 , . . . , λ n−1 ) the weight of z T . With respect to the natural ordering on weights, it is clear that each V (λ) has a vector of highest weight which is unique up to scalar multiple. Moreover, the irreducible corepresentations of C q [SU n ] are uniquely identified by their highest weights. The corresponding statements for right comodules also hold. The Hopf algebra C q [U n ] is also cosemisimple [33, §Theorem 2.11]. Its irreducible comodules are indexed by pairs (λ, k), where λ identifies a Young diagram with n − 1 rows and k ∈ Z. Each comodule has a concrete realisation in terms of the standard monomials of the tableaux for λ (defined in analogy with the C q [SU n ] case) multiplied by det k . With respect to the dual pairing between C q [U n ] and the quantised enveloping algebra U q (gl n ), there is an obvious analogous notion of weights. Moreover, the irreducible comodules are classified by their highest weights [33, §1.4, §2.2]. Some Identities In this subsection we recall two families of technical identities which prove very useful useful throughout the paper. The first is expressed in terms of the following notation: for I ⊆ {1, . . . , n}, and i, j ∈ {1, . . . , n}, I ij := I\{i} ∪ {j} if i = j and i ∈ I I if i = j. For i = j, when i / ∈ I or j ∈ I we do not assign a direct meaning to the symbol I ij , however we do denote z We now recall the a q-deformation of the classical Laplace expansion of a matrix minor. (−q) ℓ(J 1 ,J c 1 ) z I J = I 1 (−q) ℓ(I 1 ,I c 1 ) z I 1 J 1 z I c 1 J c 1 ,(10) where summation is over all subsets I 1 ⊆ I such that |I 1 | = |J 1 |. Remark 2.13 It should be noted that the proof of Proposition 2.11 referred to above is for the bialgebra C q [M n ] endowed with the coquasi-triangular structure unscaled by the factor q − 1 n . However, the given identities are directly implied by this result. Moreover, the proof gives exact values are given for r(u i j , z I J ) and r(z I J , u i j ) in the non-zero cases. Three Principal Comodule Algebras In this section we recall the definition of the quantum Grassmannian C q [Gr n,r ] as the quantum homogeneous space associated to a certain Hopf algebra map from C q [SU n ] onto C q [SU r ] ⊗ C q [U n−r ] as introduced in [31]. Moreover, we introduce a novel quantum homogeneous space associated to a Hopf algebra map C q [SU n ] → C q [SU r ] ⊗ C q [U n−r ]. 
We then use this homogeneous space to give a workable description of the line bundles over C q [Gr n,r ] central to our later proof of the Borel-Weil theorem below. Finally, we construct the standard circle bundle over C q [Gr n,r ] which generalises the relationship between the odd-dimensional quantum spheres and quantum projective space to the Grassmannian setting. Quantum Grassmannians The quantum Grassmannians are defined in terms of three Hopf algebra maps which we now recall. Let α r : C q [SU n ] → C q [U r ] be the surjective Hopf * -algebra map defined by α r (u n n ) = det −1 r , α r (u i j ) = δ ij 1, for i, j = 1, . . . , n; (i, j) / ∈ R × R, α r (u i j ) = u i j , for (i, j) ∈ R × R Moreover, let β r : C q [SU n ] → C q [SU n−r ] be the surjective Hopf * -algebra map β r (u i j ) = δ ij 1, for i, j = 1, . . . , n; (i, j) / ∈ R c × R c , β r (u i j ) = u i−r j−r , for (i, j) ∈ R c × R c . Definition 3.1. The quantum Grassmannian C q [Gr n,r ] is the quantum homogeneous space associated to the Hopf * -algebra surjection π n,r : C q [SU n ] → C q [U r ] ⊗ C q [SU n−r ], π n,r := (α r ⊗ β r ) • ∆. Since C q [SU r ] ⊗ C q [U n−r ] is the product of two cosemisimple Hopf algebras it is itself cosemisimple, and so, as discussed in §2.1.3 the extension C q [Gr n,r ] ֒→ C q [SU n ] is a principal comodule algebra. We should also recall the alternative description of C q [Gr n,r ] in terms of the quantised enveloping algebra U q (sl n ). Consider the subalgebra U q (l r ) ⊆ U q (sl n ) generated by the elements {E i , F i , K j | i, j = 1, . . . , n − 1; i = r}, We omit the proof of the following lemma which is a straightforward technical exercise. Lemma 3.2 It holds that C q [Gr n,r ] = {g ∈ C q [SU n ] | g ⊳ l = ε(l)g, for all l ∈ U q (l r )}. We now present a set of generators for the algebra C q [Gr Theorem 3.3 The quantum Grassmannian C q [Gr n,r ] is generated as an algebra by the elements {z IJ := z I z J | |I| = r, |J| = n − r}. Example 3.4. For the special case of r = 1, we see that C q [SU 1 ] ≃ C, and π n,r reduces to α 1 . Hence the associated quantum homogeneous space is quantum projective space C q [CP n ] as introduced in [31]. In the special case n = 1, this reduces to the standard Podleś sphere. As is easily seen, the algebra C q [Gr n,r ] is isomorphic to C q −1 [Gr n,n−r ] under the restriction of the map ϕ : C q [SU n ] → C q −1 [SU n ] defined by ϕ(u i j ) := u n−i+1 n−j+1 . This isomorphism generalises the classical isomorphism of Grassmannians corresponding to the symmetry of the Dynkin diagram of sl n . As is readily checked, the isomorphism extends to all our later constructions, and so, for convenience we restrict to the case of r ≥ n − r. The Quantum Homogeneous Space C q [S n,r ] To define the quantum homogeneous space C q [S n,r ], we need to introduce another Hopf algebra map. Let α ′ n : C q [SU n ] → C q [SU r ] be the surjective Hopf * -algebra map defined by setting α ′ r (u i j ) = δ ij 1, for i, j = 1, . . . , n; (i, j) / ∈ R × R, α ′ r (u i j ) = u i j , for (i, j) ∈ R × R. Definition 3.5. We denote by C q [S n,r ] the quantum homogeneous space associated to the surjective Hopf * -algebra map σ n,r : C q [SU n ] → C q [SU r ] ⊗ C q [SU n−r ] for σ n,r := (α ′ r ⊗ β r ) • ∆. Just as for the quantum Grassmannians, we present an alternative quantised enveloping algebra version of the definition. Denote by U q (k r ) ⊆ U q (sl n ) the subalgebra of U q (sl n ) generated by the set of elements {E i , F i , K i | i = 1, . . . , n − 1}. 
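To make Theorem 3.3 and Example 3.4 concrete in the smallest case (an added illustration, not in the original): for n = 2 and r = 1 one has R = {1} and R^c = {2}, so z_I = u^i_1 and z_J = u^j_2 for I = {i}, J = {j}, and C_q[Gr_{2,1}] is generated by the four elements
$$ z_{IJ} = u^i_1\, u^j_2, \qquad i, j = 1, 2. $$
Since in C_q[SU_2] one has u^2_2 = (u^1_1)^* and u^1_2 = -q\,(u^2_1)^*, these are, up to scalars, the familiar generators u^i_1 (u^k_1)^* of the standard Podleś sphere.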
An analogous result to Lemma 3.2 holds in this case. We omit the proof which is again a technical exercise. Lemma 3.6 It holds that C q [S n,r ] = {g ∈ C q [SU n ] | g ⊳ k = ε(k)g, for all k ∈ U q (k r )}. Example 3.7. For the case of r = 1, the map σ n,r reduces to β 1 , and so, the associated quantum homogeneous space is the well known quantum sphere C q [S 2n+1 ] of Vaksmann and Soǐbel'man [35,31,45], which in the case n = 1 reduces to C q [SU 2 ]. Line Bundles over the Quantum Grassmannians Since V k → V k ⊗ C q [U r ] ⊗ C q [SU n−r ], v → v ⊗ det k r ⊗1, k ∈ Z. We denote the line bundle corresponding to V k by E −k , and identify it with its canonical image in C q [SU n ]. Lemma 3.8 It holds that C q [S n,r ] ≃ k∈Z E k . Moreover, this decomposition is an algebra grading of C q [S n,r ]. Proof. For proj : C q [U n−1 ] → C q [SU n ] the canonical projection, the definitions of π n,r and σ n,r imply commutativity of the diagram C q [SU n ] πn,r / / σn,r + + ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ ❲ C q [U r ] ⊗ C q [SU n−r ] proj⊗id C q [SU r ] ⊗ C q [SU n−r ]. For W an irreducible direct summand in the Peter-Weyl decomposition of C q [U n ], and v ∈ W ∩ C q [S n,r ], it holds that ∆ πn,r (v) ∈ W ⊗ α r (W ) ⊗ 1. A basic weight argument will confirm that α r (W ) is irreducible as a coalgebra, and so, proj α r (W ) = C1 if only if dim(W ) = 1. It follows from cosemisimplicity of C q [U n ], and the classification of its irreducible comodules in §2. 3.3, that all such sub-coalgebras are of the form C det k n−1 , for k ∈ Z. Thus, we see that v ∈ C q [S n,r ] only if it is contained in k∈Z E k . Moreover, since C q [S n,r ] is clearly homogeneous with respect to to the Peter-Weyl decomposition, we can conclude that C q [S n,r ] ⊆ k∈Z E k . Since the opposite inclusion is clear, we have the required vector space decomposition C q [S n,r ] ≃ k∈Z E k . Finally, the fact that ∆ πn,r is an algebra map implies that this decomposition is an algebra grading. Moreover, z I ∈ E 1 and z J ∈ E −1 , and E 0 = C q [Gr n,r ]. Proof. Note first that ∆ πn,r (z I ) = z I A ⊗ α r (z A B ) ⊗ β r (z B ) = z I A ⊗ α r (z A ) ⊗ β r (z) = z I ⊗ α r (z) ⊗ 1 = z I ⊗ det r ⊗1. An analogous calculation shows that ∆ πn,r (z J ) = z J ⊗ det −1 n−r ⊗1. Thus we see that z I ∈ E 1 , z J ∈ E −1 , and the algebra generated by z I and z J is contained in C q [S n,r ]. Moreover, this algebra is homogeneous with respect to the decomposition C q [S n,r ] ≃ κ∈Z E k . We denote the space of homogeneous elements of degree k by E k . It follows from Theorem 3.3 and (9) that E k is a subobject of E k in the category G M Mod M . Hence, since the latter is clearly irreducible, we must have E k = E k . It now follows that C q [S n,r ] is generated by the elements z I and z J . The fact that E 0 = C q [Gr n,r ] is simply a restatement of Theorem 3.3. Moreover, we recover the well-known description of C q [S 2n−1 ] as the direct sum of the line bundles over C q [CP n ]. The Standard Circle Bundle Corresponding to the algebra grading C q [S n,r ] ≃ E k , we have a right C[U 1 ]-coaction ∆ R : C q [S n,r ] → C q [S n,r ] ⊗ C[U 1 ] which acts as ∆ R (z I ) = z I ⊗ t and ∆ R (z J ) = z J ⊗ t −1 . With respect to ∆ R we clearly have C q [S n,r ] co(C[U 1 ]) = C q [Gr n,r ]. As is well known, for the special case of r = 1 the extension C q [CP n ] ֒→ C q [S 2n−1 ] is a principal comodule algebra with C[U 1 ]-fibre. The following proposition shows that this this fact extends to all values of r. 
Proposition 3.11 A principal ℓ-map ℓ : C[U 1 ] → C q [S n,r ] ⊗ C q [S n,r ] is defined by t k → S z k (1) ⊗ z k (2) , t −k → S z k (1) ⊗ z k (2) , k ∈ N 0 . Hence, C q [Gr n,r ] ֒→ C q [S n,r ] is a principal C[U 1 ]-comodule algebra, which we call the standard circle bundle of C q [Gr n,r ]. Proof. We begin by showing that ℓ is well defined, which is to say that its image is contained in C q [S n,r ] ⊗ C q [S n,r ]. Note first that S z k (1) ⊗ z k (2) = K 1 ,...,K k S(z R K k ) · · · S(z R K 1 ) ⊗ z K 1 · · · z K k .(12) It follows from (8) that each S(z R K i ) is proportional to z K c i . Hence, for k ≥ 0, we have that ℓ(t k ) ∈ C q [S 2n−1 ] ⊗ C q [S 2n−1 ] . The case of k < 0 is established analogously. We now show that the requirements of Theorem 2.4 are satisfied. It is obvious that conditions 1 and 2 hold. For k ≥ 0, condition 3 follows from (ℓ ⊗ id C[U 1 ] ) • ∆ C[U 1 ] (t k ) = ℓ(t k ) ⊗ t k = S(z k (1) ) ⊗ z k (2) ⊗ t k = (id Cq[SUn] ⊗ ∆ R ) S(z k (1) ) ⊗ z k (2) = (id Cq[SUn] ⊗ ∆ R ) ℓ(t k ) . The fourth condition of Theorem 2.4 is demonstrated analogously, as are both conditions for the case of k < 0. We now recall the definition of a cleft comodule algebra, which can be viewed as a noncommutative generalisation of a trivial bundle. Proof. For r ≥ 2, by looking at the action of α 1 on the generators z I and z J , it is easily seen that α 1 restricts to a map C q [S n,r ] → C q [S n−1,r−1 ] which preserves the Z-grading, or equivalently, is a right C[U 1 ]-comodule map. Moreover, α ′ 1 restricts to a right C[U 1 ]-comodule map C q [S 2n−1 ] → C q [S 2n−3 ], for n ≥ 3. By taking appropriate compositions of these maps one can produce a right C[U 1 ]-comodule map f : C q [S n,r ] → C q [S 3 ] ≃ C q [SU 2 ]. Assume now that there exists a convolution-invertible right C[U 1 ]-comodule map j : C q [U 1 ] → C q [S n,r ] with convolution inverse j. Since f is clearly a unital algebra map, it would hold that Remark 3.14 In [7] it was observed that the classical construction of weighting projective space (through a weighting of the U 1 -coaction on S 2n−1 ) can be directly generalised to the quantum setting. This construction can be directly extended to the C[U 1 ]-coaction on C q [S n,r ] considered above allowing one to define quantum weighted Grassmannians. This will be discussed in greater detail in [41]. f • j(t k ) f • j(t k ) = f j(t k )j(t k ) = f (1) = 1 = ε(t k ), for all k ∈ Z, implying that f • j : C[U 1 ] → C q [SU 2 ] is convolution-invertible, Remark 3.15 It was shown in [2] that for a quantum principal C[U 1 ]-comodule algebra M ֒→ P , admitting a suitable C * -completion M ֒→ P , the C * -algebra P is a Cuntz-Pimsner algebra over M . The compact quantum group completion of the extension C q [Gr n,r ] ֒→ C q [S n,r ] is easily seen to satisfy the required conditions, and so, C q [S n,r ] is a Cuntz-Pimsner algebra over C q [Gr n,r ]. This will be discussed in greater detail in [41]. A Restriction Calculus Presentation of Ω 1 q (Gr n,r ) In this subsection we generalise the work of [35] for the case of quantum projective space and realise the Heckenberger-Kolb calculus of C q [Gr n,r ] as the restriction of a quotient of the standard bicovariant calculus over C q [SU n ]. This allows us to present the calculus as the base calculus of a quantum principal bundle. The universal principal connection, associated to the ℓ-map considered in §3.3.2, is shown to descend to this quotient and to induce covariant holomorphic structures on the line bundles of C q [Gr n,r ]. 
The Heckenberger-Kolb Calculus for the Quantum Grassmannians We give a brief presentation of the Heckenberger-Kolb calculus of the quantum Grassmannians starting with the classification of first-order differential calculi over C q [Gr n,k ]. Recall that a first-order calculus is called irreducible if it contains no non-trivial subbimodules. Theorem 4.1 [21, §2] There exist exactly two non-isomorphic irreducible left-covariant first-order differential calculi of finite dimension over C q [Gr n,k ]. The direct sum of Ω (1,0) and Ω (0,1) is a * -calculus which we call it the Heckenberger-Kolb calculus of C q [Gr n,k ] and which we denote by Ω 1 q (Gr n,r ). Lemma 4.2 [22] Denoting by Ω • q (Gr n,r ) = k∈N 0 Ω k q (Gr n,r ) the maximal prolongation of Ω 1 q (Gr n,r ), each Ω k q (Gr n,r ) has classical dimension, which is to say, dim Ω k q (Gr n,r ) = 2r(n − r) k , k = 0, . . . , 2r(n − r), and Ω k q (Gr n,r ) = 0, for k > 2r(n − r). Moreover, the decomposition Ω 1 q (Gr n,r ) into its irreducible sub-calculi induces a pair of opposite complex structures for the total calculus of Ω 1 (Gr n,r ). A Family of Quotients of the Standard Bicovariant Calculus on C q [SU n ] We begin by recalling the n 2 -dimensional basis for the calculus, as originally constructed in [35,Lemma 4.2]. Lemma 4.3 For q = 1, a basis of Φ(Ω 1 q,bc (SU n )) is given by b ij := q 2 n ν −1 [u i j ], b ii := −q 2(i−1) ν −2(1−δ i1 ) [u i 1 S(u 1 i )], i, j = 1, . . . , n; i = j. Moreover, a dual basis is given by the functionals b ij := Q ji , b ii := Q ii . It is easily shown that the restriction of the standard bicovariant calculus Ω 1 q,bc (SU n ) to the quantum Grassmannians has non-classical dimension, and so, it cannot be isomorphic to the Heckenberger-Kolb calculus. We circumvent this problem by constructing a certain family of quotients of Ω 1 q,bc (SU n ). We omit the proof of the lemma which is a direct generalisation of [35,Lemma 4.3]. Lemma 4.4 A right submodule of Φ Ω 1 q,bc (SU n ) is given by V r := span C {b ij | i, j = r + 1, . . . , n}. Hence, denoting by B r the sub-bimodule of Ω 1 q,bc (SU n ) corresponding to V r , a differential calculus is given by the quotient Ω 1 q (SU n , r) := Ω 1 q,bc (SU n )/B r . We denote its corresponding ideal in C q [SU n ] by I Cq [SUn] . Finally, we come to right covariance of the calculus, as defined in Definition 2.9. We omit the proof which is an standard argument using the corepresentation theory of C q [SU n ] and C q [U n ] presented in §2.3.3. We note that the lemma can also be proved using a direct generalisation of [35,Lemma 4.3]. Lemma 4.5 The calculus Ω 1 q (SU n , r) is right C q [SU r ] ⊗ C q [U n−r ]-covariant. 4.3 The Action of Q on the Generators of C q [S n,r ] and C q [Gr n,r ] In this subsection we establish necessary conditions for non-vanishing of the functions Q ij on the generators of C q [S n,r ] and C q [Gr n,r ]. These results are used in the next subsection to describe the restriction of Ω 1 q (SU n , r) to the quantum Grassmannians. Proof. By Goodearl's formulae Q ij (z I J ) = A n a=1 r(u i a ⊗ z I A )r(z A J ⊗ u a j ) = n a=1 r(u i a ⊗ z I J aj )r(z J aj J ⊗ u a j ). Now r(u i a ⊗ z I J aj ) gives a non-zero answer only if I ai = J aj , or equivalently only if I = J ij , or equivalently only if J = I ji . Hence Q ij (z I J ) gives a non-zero answer only in the stated cases. 2. It follows from Proposition 2.11 that, for i = 1, . . . 
, r, Q ii (z) = n a=1 r(u i a ⊗ z R K )r(z K ⊗ u a i ) = r(u i i ⊗ z)r(z ⊗ u i i ) = q 2− 2r n =: λ,(13) where we have used the identity r(u k k ⊗ z I I ) = r(z I I ⊗ u k k ) = q δ k,I , for δ k,I = 1 if k ∈ I, and δ k,I = 0, if k / ∈ I, proved in [15,Lemma 2.1]. An analogous argument shows that Q ii (z) = λ −1 , for all i. Building on this lemma, we next produce a set of necessary requirements for nonvanishing of the maps Q ij on the generators of C q [Gr n,r ]. Moreover, we have Q ij (z RR c ) = Q ij (1). Proof. Note first that Q ij (z IJ ) = |K|=r |L|=n−r n a=1 r(u i a ⊗ z IJ KL )r(z KL ⊗ u a j ) = |K|=r |L|=n−r n a,b,c=1 r(u i a ⊗ z J L )r(u a b ⊗ z I K )r(z K ⊗ u b c )r(z L ⊗ u c j ) = |L|=n−r n a,c=1 r(u i a ⊗ z J L )Q ac (z I )r(z L ⊗ u c j ) = |L|=n−r i a=1 j c=1 r(u i a ⊗ z J L )Q ac (z I )r(z L ⊗ u c j ). Now r(z L ⊗ u c j ) = 0 only if c ≤ j, which cannot happen if c = j since R c cj would contain a repeated element and could not be equal to I. Hence Q ij (z IJ ) = |L|=n−r i a=1 j c=1 r(u i a ⊗ z J L )Q ac (z I )r(z L ⊗ u c j ) = i a=1 r(u i a ⊗ z J )Q aj (z I )r(z R c ⊗ u j j ). If we now assume that I = R, then by the above lemma, the constant Q aj (z I ) = 0 only if (a, j) ∈ R × R c . Since r(u i a ⊗ z J ) = 0 only if J = R c ia , and we have assumed that (i, j) / ∈ R c × R c , we can get a non-zero result only if i = a. Thus, we have shown that Q ij (z IJ ) = 0 only if I = R ij and J = R c . If we instead assume that I = R, then an analogous argument will show that we get a non-zero result only when J = R c ij . The identity Q ij (z RR c ) = Q ij (1) follows the Lemma 4.6.2 above. It can also be derived from Laplacian expansion as follows. In (10), setting I = J = {1, . . . , n} and J 1 = R gives 1 = det = |I 1 |=r (−q) ℓ(I 1 ,I c 1 ) z I 1 R z I c 1 R c = |I 1 |=r (−q) ℓ(I 1 ,I c 1 ) z I 1 z I c 1 = |I 1 |=r (−q) ℓ(I 1 ,I c 1 ) z I 1 I c 1 . As we just established, Q ij (z I 1 I c 1 ) = 0 unless I 1 = R. Hence, as required Q ij (1) = |I 1 |=r (−q) ℓ(I 1 ,I c 1 ) Q ij (z I 1 I c 1 ) = Q ij (z RR c ). The Heckenberger-Kolb Calculus as the Restriction of Ω 1 q (SU n , r) Using the results of the previous subsection, we present the Heckenberger-Kolb calculus as the restriction of the calculus Ω 1 q (SU n , r), reproduce its decomposition into Ω (1,0) and Ω (0,1) , and introduce a basis of Φ(Ω 1 q (Gr n,r )). Lemma 4.8 The subspaces V (1,0) := {b ij | (i, j) ∈ R c × R }, V (0,1) := {b ij | (i, j) ∈ R × R c } are non-isomorphic right C q [SU r ] ⊗ C q [U n−r ]-subcomodules of Λ 1 Cq[SUr]⊗Cq[U n−r ] . Moreover, they are right C q [SU n ]-submodules. Proof. A routine application of the right adjoint coaction will verify that V (1,0) and V (0,1) are subcomodules of Λ 1 . A direct comparison of weights confirms that the two comodules are non-isomorphic. The fact that V (1,0) is a submodule follows from the calculation Q ab (u i j u k l ) = 0, for all (i, j) ∈ R c × R; (a, b) / ∈ R × R c ; k, l = 1, . . . , n. A similar argument shows that V (0,1) is a submodule. Proposition 4.9 The restriction of Ω 1 q (SU n , r) to C q [Gr n,r ] is the Heckenberger-Kolb calculus. Moreover, a choice of complex structure is given by 0) , Ω (1,0) := C q [SU n ] H V (1,Ω (0,1) := C q [SU n ] H V (0,1) . Proof. It follows from Lemma 4.7 that span C [(z IJ ) + ] | |I| = r, |J| = n − r ⊆ V (1,0) ⊕ V (0,1) . Now it is clear that C q [Gr n,r ] + is generated as a right C q [Gr n,r ]-module by the set z IJ + | |I| = r, |J| = n − r . 
Thus, since the above lemma tells us that V (1,0) ⊕ V (0,1) is a right submodule of Λ 1 , we must have that Φ(Ω 1 ) = [m + ] | m ∈ C q [Gr n,r ] ⊆ V (1,0) ⊕ V (0,1) , where Ω 1 denotes the restriction of Ω 1 q (SU n , r) to C q [Gr n,r ]. Lemma 4.7 tells us that each non-zero [ z IJ + ] is contained in either V (1,0) or V (0,1) . Thus, since V (1,0) and V (0,1) are submodules of V (1,0) ⊕ V (0,1) , we must have a decomposition Φ(Ω 1 ) = W (1,0) ⊕ W (0,1) := Φ(Ω 1 ) ∩ V (1,0) ⊕ Φ(Ω 1 ) ∩ V (0,1) as well as a corresponding decomposition of calculi. Now Theorem 4.1 tells us that there can exist no non-trivial calculus of dimension strictly less than dim Φ(Ω (1,0) ) . Thus the inequality (Ω (1,0) ) , implies that W (1,0) = V (1,0) , or W (1,0) = 0. To show that the former is true we need only show that there exists a non-zero [m] ∈ V (1,0) , for some m ∈ C q [Gr n,r ]. This is easily done by specialising the calculations of Lemma 4.6 and Lemma 4.7 to the simple case of z RR c r+1,1 , and showing that Q r+1,1 z RR c r+1,1 = 0. A similar argument establishes that W (0,1) = V (0,1) , and so, Ω 1 is equal to Ω 1 q (Gr n,r ), Ω (1,0) ⊕ Ω (1,0) , or Ω (1,0) ⊕ Ω (1,0) . That the latter is true follows from the fact that V (1,0) and V (0,1) are non-isomorphic, and so, Ω 1 is isomorphic to the Heckenberger-Kolb calculus as claimed. dim W (1,0) ≤ dim V (1,0) = dim ΦCorollary 4.10 For all (i, j) ∈ R c × R, there exist non-zero constants λ ij such that [z R ij R c ] = λ ij b ji , [z RR c ji ] = λ ji b ij . Proof. Since Ω 1 q (Gr n,r ) ∈ H Mod 0 , the elements [z R ij R c ], [z RR c ji ], for i, j ∈ R × R c , have to span V (1,0) ⊕ V ( Two Quantum Principal Bundles In this section we use the calculus Ω 1 q (SU n , r) to give a quantum principal bundle presentation of the Heckenberger-Kolb calculus Ω 1 q (Gr n,r ) in terms of the Hopf-Galois extensions C q [Gr n,r ] ֒→ C q [S n,r ] and C q [Gr n,r ] ֒→ C q [SU n ]. We then construct principal connections for these bundles and show that the covariant holomorphic structures induced on the line bundles over C q [Gr n,r ] are the unique such structures. The Extension C q [Gr n,r ] ֒→ C q [S n,r ] We begin by verifying the properties of a quantum principal bundle for the differential structure induced by Ω 1 q (SU n , r) on the extension C q [Gr n,r ] ֒→ C q [S n,r ]. We then show that the universal calculus principal connection associated to the ℓ-map introduced in §3.4 descends to a principal connection for the bundle. We begin by introducing a Hopf algebra map useful throughout this section: ρ : C q [SU n ] → C[U 1 ], u 1 1 → t, u n n → t −1 , u i j → δ ij 1, for (i, j) = (1, 1), (n, n). Throughout, to lighten notation, we denote H ′ := C q [U r ] ⊗ C q [SU n−r ]. Lemma 5.1 A commutative diagram is given by Ω 1 u (S n,r ) U / / ver C q [SU n ] H ′ C q [S n,r ] + . id⊗ρ s s ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ ❢ C q [SU n ] ⊗ C[U 1 ] + 2. The Hopf-Galois extension C q [Gr n,r ] ֒→ C q [S n,r ] endowed with the restriction of Ω 1 q (SU n , r) to C q [S n,r ] is a quantum principal bundle. Proof. 1. Recalling the isomorphism U : Ω 1 q (S n,r ) ≃ C q [SU n ] H ′ C q [S n,r ] + , we see that (id ⊗ ρ) • U dz I = z I ⊗ (t − 1) = ver d u z I , (id ⊗ ρ) • U dz J = z J ⊗ (t −1 − 1) = ver d u z J . Let N ⊆ Ω 1 u (S n,r ) be the sub-bimodule corresponding to the restriction of the calculus Ω 1 q (SU n , r) to C q [S n,r ]. Denoting J : = I Cq[SUn] ∩ C q [S n,r ] + , Takeuchi's equivalence implies that U(N ) = C q [SU n ] H ′ J. 2. 
Commutativity of the diagram above implies that ver(N ) = (id ⊗ ρ) C q [SU n ] H ′ J .(14) Denote by T 1 the subspace of C q [S n,r ] + spanned by monomials in the elements z, z, and denote T 2 the subspace spanned by monomials which contain as a factor at least one generator not equal to z or z. Clearly, C q [S n,r ] + ≃ T 1 ⊕ T 2 . Moreover, Lemma 4.6 implies that J is homogeneous with respect to this decomposition allowing us to write J ≃ J 1 ⊕ J 2 . It is easily seen that T 1 and T 2 are sub-comodules of C q [S n,r ] + with respect to the left C q [SU r ]⊗ C q [SU n−r ]-coaction of C q [S n,r ]. Moreover, the coaction acts trivially on T 1 . Hence we have ver(N ) =(id ⊗ ρ) C q [SU n ] H ′ J =(id ⊗ ρ) C q [SU n ] ⊗ J 1 ⊕ C q [SU n ] H ′ J 2 =C q [SU n ] ⊗ ρ(J 1 ) =C q [SU n ] ⊗ ρ(J). Finally, it follows from Lemma 4.5, and the fact that ρ is Hopf algebra map, that ρ(J) is Ad-coinvariant right ideal of C[U 1 ], and so, we have a quantum principal bundle. We now move on to showing that ℓ induces a well-defined connection for the bundle. In the process we find a basis for Λ 1 and an alternative description of the action of ℓ. Throughout, to lighten notation, we denote H ′ := C q [SU r ] ⊗ C q [SU n−r ]. Proposition 5.2 1. Λ 1 C[U 1 ] = H ⊗ C[t − 1]. 2. Let i : C[U 1 ] → C q [S n,r ] be the linear map defined by i(t k ) := z k , i(1) := 1, and i(t −k ) = z k . A commutative diagram is given by Ω 1 u (S n,r ) C q [SU n ] H ′ (C q [S n,r ]) + . U −1 o o C q [SU n ] ⊗ C[U 1 ] + m•(id⊗ℓ) Similarly, it can be shown that Q i(t −k − λt −k−1 ) = 0, for all k ≥ 1. Hence we have shown that i(ρ(ker(Q))) ⊆ ker(Q). The fact that Π ℓ descends to a well-defined non-universal connection now follows from i(ρ(I Cq[SUn] )) = i ρ(ker(Q) + ) ⊆i ρ(ker(Q) ∩ C q [SU n ] + ⊆ ker(Q) ∩ C q [SU n ] + = ker(Q) + . Holomorphic Strucutures In this subsection we show that the covariant connections induced on E k by Π ℓ restrict to covariant holomorphic structures for E k , and that these are the unique such structures. Following the same approach as in [40], we convert these questions into representation theory using the following two observations: Since the difference of two connections is always a left module map, the difference of two covariant ∂-operators for any object in G M Mod is always a morphism. Moreover, since the curvature operator of any connection is always a left module map, the curvature operator of a covariant ∂-operator is a morphism in G M Mod. As explained in §2.2.4, for any line bundle E k , the connection coming from the principal connection Π ℓ induces a ∂-operator by projection. We denote this operator by ∂ E k Lemma 5.3 The ∂-operator ∂ E k is the unique covariant ∂-operator for E k . Moreover, it is a holomorphic structure. Proof. If there exists a morphism E k → Ω (0,1) ⊗ Cq[Grn,r] E k in G M Mod, then it is clear that there exists a morphism C q [Gr n,r ] = E 0 → Ω (0,1) ⊗ Cq[Grn,r] E 0 ≃ Ω (0,1) . However, since Theorem 4.1 implies that Φ(Ω (0,1) ) is irreducible as an object in H Mod, no such morphism exists. Thus, by the comments at the beginning of the subsection, these can exist no other covariant ∂-operator for E k . Similarly, there exists a morphism E k → Ω (0,2) ⊗ Cq[Grn,r] E k only if Φ(Ω (0,2) ) contains a copy of the trivial comodule. It follows from [36, Lemma 5.1] that Φ(Ω (0,2) ) is isomorphic to a quotient of Φ(Ω (0,1) ) ⊗2 . 
An elementary weight argument will show that Φ(Ω (0,1) ) ⊗2 does not contain a copy of the trivial comodule, and so, ∂ E k is a covariant holomorphic structure. Corollary 5.4 The principal connection Π ℓ is the unique left C q [SU n ]-covariant principal connection for the bundle Ω 1 q (Gr n,r ) ֒→ Ω 1 q (S n,r ). Proof. Assume that there exists a second covariant principal connection Π ′ . Extending the argument of the above lemma, it is easy to show that there exists only one left C q [SU n ]-covariant connection for each E k . Hence, for any e ∈ E k , (id − Π) • de = ∇ ′ (e) = ∇(e) = (id − Π ℓ ) • de, implying that Π ′ (d(e) = Π ℓ (de). Now since C q [S n,r ] ≃ k∈Z E k , every element of Ω 1 q (S n,r ) is a sum of elements of the form gde, for g ∈ C q [SU n ]. Thus, since Π ℓ and Π ′ are both left C q [SU n ]-module maps, they are are equal. The Extension C q [Gr n,r ] ֒→ C q [SU n ] In our proof of the Borel-Weil theorem in the next section, it proves very useful to have an alternative description of Π ℓ as the restriction of a principal connection for the bundle C q [Gr n,r ] ֒→ C q [SU n ]. Proposition 5.5 1. The Hopf-Galois extension C q [Gr n,r ] ֒→ C q [SU n ] endowed with the first-order calculus Ω 1 q (SU n , r) is a quantum principal bundle. 2. A C q [U r ] ⊗ C q [SU n−r ]-comodule complement to V Cq[Grn,r] in Λ 1 Cq[Ur]⊗Cq[SU n−r ] is given by V 0 := span C {b ij | i, j = 1, . . . , r}.(16) Moreover, it is the unique such complement, implying that the corresponding left C q [SU n ]-covariant principal connection Π is the unique such connection. 3. The principal connection Π restricts to Π ℓ on Ω 1 q (S n,r ). Proof. 1. This follows from an argument analogous to the proof of Lemma 5. (16) for the case of r = 1, that is the case of C q [CP n ], is routine, see [35] for details. Hence, we concentrate on the case of n ≥ 2. A direct examination shows that V 0 decomposes into a one-dimensional comodule and an irreducible r 2 − 1-dimensional comodule. A direct comparison of dimensions will show that these two comodules are pairwise non-isomorphic to V (1,0) and V (0,1) . Hence the decomposition, and the corresponding covariant principal connection, are unique as claimed. 3. It follows from Lemma 5.1.1 and Proposition 5.2.2 that the principal connection Π ℓ acts on C q [SU n ] H Φ(Ω 1 q (S n,r ) as id ⊗ (i • ρ). Thus to show that Π restricts to Π ℓ it suffices to show that proj V 0 restricts to i • ρ on Φ Ω 1 q (S n,r ) . Since both maps vanish on V Cq[Grn,r] , we need only show that they agree on a complement to V Cq[Grn,r] . Proposition 5.2.1, and the fact that ρ(z − 1) = t − 1, imply that such a complement is given by C[z − t], and that i • ρ acts on C[z − 1] as the identity. Moreover, Lemma 4.6 implies that [z − 1] ∈ V 0 , and so, proj V 0 also acts on [z − 1] as the identity. Thus Π restricts to Π ℓ on C q [S n,r ] as required. Remark 5. 6 We have not considered here any lifting of Π to a principal connection at the universal level. Such a lifting, however, plays an important role in [38] where it is used to give a Nichols algebra description of the calculus Ω • q (Gr n,r ). A Borel-Weil Theorem for the Quantum Grassmannians In this section we establish a q-deformation of the Borel-Weil theorem for the Grassmannians, the principal result of the paper. We then observe that the theorem gives a presentation of the twisted homogeneous coordinate ring H q (Gr n,r ) deforming the classical ample bundle presentation of H(Gr n,r ). 
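For orientation, the classical statement being q-deformed in this section is the following; it is standard background rather than part of the quantum argument, and we write L for the ample generator of the Picard group of Gr_{n,r} (the Plücker line bundle), a notation not used elsewhere in the text:

```latex
\[
  H^{0}\big(\mathrm{Gr}_{n,r},\, L^{\otimes k}\big)\;\cong\; V_{k\varpi_{r}}
  \quad (k \ge 0),
  \qquad
  H^{0}\big(\mathrm{Gr}_{n,r},\, L^{\otimes (-k)}\big)\;=\;0
  \quad (k \ge 1),
\]
% where V_{k\varpi_r} is the irreducible SU_n-representation of highest weight k times
% the r-th fundamental weight.  Summing over k >= 0 recovers the homogeneous coordinate
% ring of the Pluecker embedding, the classical counterpart of the presentation of
% H_q(Gr_{n,r}) obtained in this section.
```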
The proof of the theorem is expressed in terms of a sequence of twisted derivation algebras embedded in a sequence of modules over a ring generated by certain quantum minors. These novel sequences have many interesting properties and will be explored in further detail in a later work. A Sequence of Twisted Derivation Algebras For any j ∈ {r + 1, . . . , n}, denoting j ′ := n − j + 1, let Z j be the subalgebra of C q [SU n ] generated by the elements z I | I ⊆ {j ′ , . . . , n} . Consider also T j the two-sided ideal of Z j generated by the elements z J ∈ Z j | J ∩ {r + 1, . . . , j − 1} = ∅ , as well as the quotient algebras S j := Z j /T j . For any x ∈ Z j , we denote its coset in S j by [x] S j . Note that, for any j, we have an obvious embedding Z j ֒→ Z j+1 , which restricts to an embedding T j ֒→ T j+1 . Thus we have a sequence S r ϕr −→ S r+1 ϕ j+1 −→ · · · S n−1 ϕ n−1 −→ S n ε −→ C. Note that all of the maps ϕ j have non-trivial kernel, and that the sequence does not form a complex. A twisted derivation algebra is a triple (A, σ, d), where A is an algebra, σ : A → A is an algebra automorphism, and d is a a linear map, called the twisted derivation, satisfying d(ab) = d(a)b + σ(a)d(b), for all a, b ∈ A. We also introduce the following projection map proj j : C q [SU n ] ⊗ V (0,1) → C q [SU n ], (k,l)∈R×R c f kl ⊗ b kl → f j ′ j . Proposition 6.2 The triple (S j , σ j , d j ) is a twisted derivation algebra, where σ j is the algebra automorphism of S j defined by σ j [z I ] S j := q δ J j +δ J j ′ [z I ] S j , and d j is the σ j -twisted derivation of S j uniquely defined by d j [z J ] S j := proj j (∂ Cq[SUn] (z J )) S j .(17) Proof. As is easily checked, an algebra automorphism σ j of C q [SU n ] is defined by σ j (u k l ) = q δ lj +δ lj ′ u k l . This restricts to an automorphism of Z j which maps T j to itself, and hence induces the required algebra automorphism of S j . Consider now the obvious lifting of d j to a map d ′ j : Z j → S j . Let us show that this map is a twisted derivation. Since ∂ G is a derivation, and proj j is a left C q [SU n ]-module map, it is clear that we only need to show that [proj j (ωz)] S j = [proj j (ω)] S j σ j [z] S j , for any ω ∈ C q [SU n ] ⊗ V (0,1) , z ∈ Z j . Moreover, this equality is implied by the identities, for any z J ∈ Z j , proj j (1 ⊗ b kl ) ⊳ z J ∈ T j , for all (k, l) = (j ′ , j),(18)proj j (1 ⊗ b j ′ j ) ⊳ z J = σ j (z I ).(19) For the first identity note that Lemma 4.3 implies proj j (1 ⊗ b kl ) ⊳ z J = A proj j z A ⊗ (b kl ⊳ z A J ) = q 2 n ν −1 A proj j z A ⊗ [u k l z A J ] = q 2 n ν −1 A proj j (k,l)∈R×R c Q ba (u k l z A J )z A ⊗ b ab = q 2 n ν −1 A Q jj ′ (u k l z A J )z A . A routine application of Goodearl's formulae shows that Q jj ′ (u k l z A J ) = 0 only if A = (J kj ′ ) jl and k ≤ j ′ ; l ≤ j. Since by assumption J ⊆ {j ′ , . . . , n}, we must have k = j ′ , and hence that A = J jl . Next our assumption that (k, l) = (j ′ , j) means we must have l ∈ {r + 1, . . . , j − 1}. Hence, z J jl ∈ T j as required. The identity (18) is proved similarly. Hence d ′ j is a twisted derivation. Finally we show that d ′ j descends to a σ j -twisted derivation on S j by showing that it vanishes on the generators of T j . When J ∩ {r + 1, . . . , j − 1} = ∅, we clearly have J jj ′ ∩ {r + 1, . . . , j − 1} = ∅, and so, [z J jj ′ ] S j = 0. Hence, by Lemma 6.1, we have that d ′ j vanishes on the generators of T j as required. For any k ∈ R c , denote P k := {k, . . . , n}. 
Moreover, let P l k be the index set constructed from P k by replacing each of its first l elements p by p ′ and then reordering. Finally, we denote by P (S r+1 ) the subset of S r+1 consisting of those elements which are products of elements of the form [z P k ] S r+1 . The following lemma is an easy consequence of the Goodearl formulae and Lemma 6.1, and so we omit its proof. Corollary 6.4 For every p ∈ P (S r+1 ), there exists a unique sequence (a r+1 , . . . , a n ) of non-negative natural numbers, at least one of which is non-zero, such that ε • d an n • ϕ n−1 • d a n−1 n−1 • · · · • ϕ j • d a r+1 r+1 (p) = 0. Lemma 6.7 Each right comodule in the decomposition of C q [S n,r ] into irreducible right C q [SU n ]-comodules contains a highest weight vector of the form k≥0 x k s k , for x k ∈Z, s k ∈ Z r+1 such that ε(x 0 ) = 0, ε(x k ) = 0, for all k = 0, and [s 0 ] S r+1 ∈ P (S r+1 ). Proof. Note first that an element of C where A is an (r × k)-array of boxes, and C is an ((n − r) × l)-array of boxes, for some k, l ∈ N 0 . The space of C q [U r ]-coinvariant elements of the corresponding comodule V is spanned by those standard monomials z T , where T is a semi-standard tableau for which the columns in A are filled as {1, . . . , r}. Regarding V now as a C q [SU n−r ]-comodule, we see that it is isomorphic to the tensor product of the comodules V (B) and V (D) corresponding to B and D respectively. Hence, the space of C q [SU n−r ]-coinvariant elements of the corresponding comodule is non-trivial if and only if V (B) is dual to V (D). Moreover, we see that a Young diagram admits a C q [SU r ] ⊗ C q [SU n−r ]-coinvariant element only if it admits a partition of the form (20). A standard highest weight argument using the Peter-Weyl decomposition and the isomorphism (1) now imply that each irreducible comodule in the decomposition of C q [S n,r ] contains a highest weight comodule of the stated form. In the statement of the theorem below, as well as in Lemma 6.12, we find it useful to adopt the following conventions λ(r, k) := k, . . . , k r , 0, . . . , 0 n−1−r ∈ Dom(n − 1), V (r, k) := V λ(r, k) . The inclusion of k∈N 0 V (r, k) in the kernel of ∂ Cq[SUn] is a direct consequence of Lemma 6.1. To establish the opposite inclusion, consider any irreducible right C q [SU n ]-comodule U in the decomposition of C q [S n,r ] which is not equal to V (r, k), for any k ∈ N 0 . Moreover, let k≥0 x k s k ∈ U be the highest weight vector presented in the above lemma. Proposition 6.4 andZ-linearity ofd j imply that we have a unique set of integers (a r+1 , . . . , a n ), at least one of which is non-zero, such that ε •d an n • · · · •φ j •d a r+1 j [x 0 s 0 ] S r+1 = ε(x 0 ) ε • d an n • · · · • ϕ j • d a r+1 j [s 0 ] S r+1 = 0. For any of the other summands x k s k we have ε •d an n •φ n · · · •φ j •d a r+1 j [x k s k ] S r+1 = ε(x k ) ε • d an n • · · · • ϕ j • d a r+1 r+1 [s k ] S r+1 = 0. From the definition of d j , for each j, it is clear that this could not happen if k≥0 x k s k were contained in the kernel of ∂ Cq [SUn] . Hence, since ∂ Cq[SUn] is a comodule map, we can conclude that U is not contained in the kernel of ∂ Cq [SUn] , and that the identity in (21) holds as required. Corollary 6.9 The calculus Ω 1 q (Gr n,r ) is connected. Proof. Note that m ∈ ker d : C q [Gr n,r ] → Ω 1 q (Gr n,r ) if and only if m ∈ ker(∂) ∩ ker(∂). Now the above theorem states that for the special case of E 0 = C q [Gr n,r ], we have H 0 (E 0 ) = ker ∂ = C. Hence ker(d) = C as required. 
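Since q-deformation does not change the dimensions of the comodules V(r, k), the identification H^0(E_k) ≅ V(r, k) of the Borel-Weil theorem below can be cross-checked numerically against the classical Weyl dimension formula, reading λ(r, k) as the partition (k, ..., k, 0, ..., 0), i.e. k times the r-th fundamental weight. The sketch below is only such an illustrative cross-check; the function names and the sample cases are ours, not the paper's.

```python
# Illustrative dimension check for Theorem 6.8 (Borel-Weil): H^0(E_k) = V(r, k),
# where V(r, k) has highest weight lambda(r, k) = (k, ..., k, 0, ..., 0) with r
# copies of k.  Dimensions are unchanged under q-deformation, so the classical
# Weyl dimension formula for GL_n applies:
#     dim V(lambda) = prod_{1 <= i < j <= n} (lambda_i - lambda_j + j - i) / (j - i).
from fractions import Fraction

def weyl_dim(lam):
    """Dimension of the irreducible GL_n-module with dominant weight lam."""
    n = len(lam)
    d = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            d *= Fraction(lam[i] - lam[j] + (j - i), j - i)
    assert d.denominator == 1
    return int(d)

def dim_H0_Ek(n, r, k):
    """dim H^0(E_k) over C_q[Gr_{n,r}] according to Theorem 6.8."""
    return weyl_dim([k] * r + [0] * (n - r))

# Gr_{2,1} is the quantum projective line: sections of E_k have dimension k + 1.
print([dim_H0_Ek(2, 1, k) for k in range(5)])   # [1, 2, 3, 4, 5]
# Gr_{4,2} with k = 1 recovers the 6-dimensional Pluecker space Lambda^2 C^4.
print(dim_H0_Ek(4, 2, 1))                       # 6
```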
The Twisted Homogeneous Coordinate Ring The twisted homogeneous coordinate ring of a general flag manifold G 0 /L 0 was introduced in [29,44]. In the q = 1 case it reduces to the homogeneous coordinate ring of G 0 /L 0 with respect to the Plücker embedding [14, §6.1]. We consider now the special case of the quantum Grassmannians. (These are subalgebras of the bialgebra C q [M n ] which is defined as C q [GL n ] without the generator det −1 n , see [35, §3].) Definition 6.10. The twisted Grassmannian homogeneous coordinate ring H q (Gr n,r ) is the subalgebra of C q [M n ] generated by the quantum minors z I , for |I| = r, which are defined just as in the C q [SU n ] case. Clearly, H q (Gr n,r ) is a left subcomodule of C q [SU n ]. The subcomodule H(Gr n,r ) * is defined to the subalgebra of C q [M n ] generated by the quantum minors z J , for |J| = n−r, which are again defined just as in the C q [SU n ] case. The image of the multiplication map m : H(Gr n,r ) ⊗ H(Gr n,r ) * → C q [SU n ] is clearly equal to C q [S n,r ]. Hence we have a presentation of the coordinate algebra C q [Gr n,r ] as the C[U 1 ]-coinvariant part of a quotient of H(Gr n,r ) ⊗ H(Gr n,r ) * . This Theorem 1 . 1 ( 11Borel-Weil) It holds that d(ab) = a(db) + (da)b, a, b ∈ A, and for which Ω 1 = span C {adb | a, b ∈ A}. A morphism between two first-order differential calculi (Ω 1 (A), d Ω ) and (Γ 1 (A), d Γ ) is a bimodule map ϕ : Ω 1 (A) → Γ 1 (A) such that ϕ • d Ω = d Γ . Note that when a morphism exists it is unique. The direct sum of two first-order calculi (Ω 1 (A), d Ω ) and (Γ 1 (A), d Γ ) is the calculus (Ω 1 (A) ⊕ Γ 1 (A), d Ω + d Γ ). for m, n ∈ M. Any covariant calculus Ω 1 (M ) is naturally an object in G M Mod M . Using the surjection Ω 1 u (M ) → Ω 1 (M ), it can be shown that there exists a subobject I ⊆ M + (where M + Theorem 2 .10 [ 8 , 28Theorem 3.19] For a Hopf-Galois extension M = P co(H) , it holds that: where ℓ(S, T ) := {(s, t) ∈ S × T | s > t} , for S, T ⊆ {1, . . . , n} [33, §3.1]. Moreover, the coproduct acts on z I J according to J = z J I ij := 0, for all index sets J. Proposition 2. 11 ( 11Goodearl formulae) [15, §2] It holds that, for i, j = 1, . . . if i ≥ j and J = I ji , 2. r(z I J , u i j ) = 0 only if i ≤ j and J = I ji . Lemma 2. 12 ( 12Laplacian expansions) For I, J ⊆ {1, . . . , n} with |I| = |J|, and J 1 a choice of non-empty subset of J, it holds that C q [U r ] and C[SU n−r ] are both cosemisimple, the 1-dimensional corepresentations of C q [U r ] ⊗ C[SU n−r ] will be exactly those which are tensor products of a 1-dimensional corepresentation of C q [U r ] and a 1-dimensional corepresentation of C[SU n−r ]. The classification of comodules presented in §2.3.3 now implies that all such comodules are of the form Proposition 3. 9 9The algebra C q [S n,r ] is generated by the elements {z I , z J | I, J ⊆ {1, . . . , r}; |I| = r, |J| = n − r}. Example 3. 10 . 10For the special case of r = n − 1, the generating set (11) reduces to the well-known set of generators for C q [S 2n−1 ] {z i := S(u n i ), z i := u i n | i = 1, . . . , n}. Definition 3. 12 . 12A right H-comodule algebra P is called cleft if there exists a convolution-invertible right H-comodule map j : H → P . Lemma 3. 13 13The extension C q [Gr n,r ] ֒→ C q [S n,r ] is non-cleft. Lemma 4. 6 6For i, j = 1, . . . , n, it holds that 1. Q ij (z I J ) = 0 only if I = J ij , or equivalently only if J = I ji , 2. Q ii (z) = λ and Q ii (z) = λ −1 , for a certain non-zero λ ∈ C. Lemma 4. 7 7For i, j = 1, . . . 
, n, and (i, j) / ∈ R c × R c , it holds that Q ij (z IJ ) = 0 only if I = R ij and J = R c , or I = R and J = R c ij . 0,1) , and so, they must all be non-zero. The fact that [z R ij R c ] is proportional to the basis element b ji = [u j i ], and [z RR c ji ] is proportional to the basis element b ij = [u i j ], follows from Lemma 4.7 and the fact that Q ij (u k l ) is non-zero if and only if k = j and l = i [35, §4]. d j [z J ] S j = 0, if j / ∈ J, 2. d 2 j [z J ] S j = 0, for all J ⊆ {1, . . . , n}, 3. d k+l [z P l k ] a S k+l = a!(−q) − al|J | n ν a [z P l+1 k ] a S k+l , for a ∈ N. Theorem 6. 8 ( 8Borel-Weil) It holds thatH 0 (E k ) = V (r, k), H 0 (E −k ) = 0, k ∈ N 0 .Proof. It is clear from the construction of ∂ Cq[SUn] , and the construction of the holomorphic structure for each line bundle, that the theorem would follow from a demonstration thatker ∂ Cq[SUn] C q [SU n ]| Cq[S n,r ] : C q [S n,r ] → C q [SU n ] ⊗ V (0,1) = k∈N 0 V (r, k). would like to thank Edwin Beggs, Tomasz Brzeziński, Freddy Díaz, Matthias Fischmann, Piotr Hajac, István Heckenberger, Masoud Khalkhali, Shahn Majid, Petr Somberg, Karen Strung, Adam-Christiaan van Roosmalen, and Elmar Wagner, for useful discussions. 1. 2. Note first that Lemma 4.8 implies that V Cq[Grn,r] C q [SU n ] = V Cq[Grn,r] . Moreover, it follows from Lemma 4.3 and Corollary 4.10 that Λ 1 ≃ V 0 ⊕ V Cq[Grn,r] . Uniqueness of the decomposition in q [SU n ]-comodule is coinvariant under C q [SU r ] ⊗ C q [SU n−r ] ifand only if it is coinvariant under C q [SU r ] and C q [U n−r ]. Consider next a Young diagram admitting a partition of the formA B C D Q i(t k − λt k−1 ) = Q(z k − λz k−1 ) = Q (z − λ1)z k = 0,for all k ≥ 1. 3. The principal connection Π ℓ corresponding to ℓ descends to a principal connection for the bundle.Proof.Lemma 4.6.2 now implies that z − λ1 ∈ ker(Q), and so, t − λ1 ∈ ρ(ker(Q)), which implies that dim H/ρ(ker(Q)) ≤ 1. Hence Λ 1 C[U 1 ] is a 1-dimensional subspace spanned by the element [z − 1].2. This is easily confirmed by direct calculation.3. Recalling (7) we see that Π ℓ descends to non-universal connection ifIt now follows from (15) that dim(H/ρ(ker(Q))) = 1, and that ρ(ker(Q))) is spanned by the elementsProof. An elementary weight argument can be used to show that S j ∩T j = T j , and so, the map Z j ֒→Z j is an inclusion, for all j. It follows from Lemma 6.1, that for any z J ∈Z, we haveBy assumption {1, . . . , r} ⊆ J, and so, the index set J lk is not defined for any pair (k, l) ∈ R × R c , and so, z J lk = 0. This means that a leftZ-module map is defined byMoreover, it is clear from the definition ofT j that this map descends to a well-defined leftZ-module mapS j →S j extending d j , and that it is the unique such map.Remark 6.6 It is natural to consider the extension of the definitionS j to a bimodule overZ. However, as is easily checked, d j does not have a natural extension to this space.The Borel-Weil TheoremIn proving our q-deformation of the Borel-Weil theorem, the most difficult part is demonstrating non-holomorphicity of an element of any line bundle E k . Roughly, our approach is to decompose E k into its irreducible left C q [SU n ]-comodules, choose a workable description of their highest weights, and then use the above sequence and Corollary 6.4 to show that they do not vanish under ∂ Cq[SUn].We begin with the workable description of the highest weight vectors. 
Since the details of the proof are for the most part completely classical applications of the corepresentation theory of §2.3.3, it is omitted.is the special Grassmannian case of the general flag manifold construction discussed in the introduction. As the following proposition shows, the theory of holomorphic structures allows us to go the other direction and construct H q (Gr n,r ) from C q [Gr n,r ] by generalising the classical ample line bundle bundle presentation of H(Gr n,r ).Theorem 6.11 The canonical projection proj : C q [M n ] → C q [SU n ] restricts to an algebra isomorphismProof. The classification of comodules of C q [SU n ] in §2.3.3 is easily seen to imply that canonical projection C q [M n ] → C q [SU n ] restricts to an injection on H q (Gr n,r ). Thus we can identify H q (Gr n,r ) with its image in C q [SU n ]. Since Theorem 6.8 tells us that k∈N 0 H 0 (E k ) is generated by z I , for |I| = r, it is clear that the two algebras are isomorphic.The Opposite Complex StructureLet Ω (•,•) denote the complex structure for Ω • (Gr n,r ) opposite to the one introduced in Proposition 4.9. Using an argument analogous to the one above, each line bundle can be shown to have a unique covariant holomorphic structure with respect to this complex structure. This causes the Borel-Weil theorem to vary as follows.Lemma 6.12 With respect to the complex structure Ω (•,•) , we haveMoreover, the canonical projection proj : C q [M n ] → C q [SU n ] restricts to an algebra isomorphism H q (Gr n,r ) * ≃ k∈N 0 H 0 (E −k ). We denote byZ j theZ-submodule of C q [SU n ] generated by the elements of Z j . Moreover, we denote byT j theZ-submodule ofS j generated by the elements of T j , and byS j the quotient moduleZ j /T j . Moreover, for any x ∈Z j , we denote its coset inS j by. x]S jWe denote byZ j theZ-submodule of C q [SU n ] generated by the elements of Z j . Moreover, we denote byT j theZ-submodule ofS j generated by the elements of T j , and byS j the quotient moduleZ j /T j . Moreover, for any x ∈Z j , we denote its coset inS j by [x]S j . Moreover d j extends uniquely to a leftZ-module mapd j :Z →Z, and ϕ j extends uniquely to a left Z-module mapφ j :S j →S j. Corollary 6.5 A C-linear inclusion S ֒→S is defined by [x] S → [x]S .Corollary 6.5 A C-linear inclusion S ֒→S is defined by [x] S → [x]S . Moreover d j extends uniquely to a leftZ-module mapd j :Z →Z, and ϕ j extends uniquely to a left Z-module mapφ j :S j →S j . Representations of quantum algebras. H H Andersen, P Polo, K Wen, Invent. Math. 104H. H. Andersen, P. Polo, K. Wen, Representations of quantum algebras, Invent. Math., 104, 1-59, (1991). Pimsner algebras and Gysin sequences from principal circle actions. F Arici, J Kaad, G Landi, J. Noncommut. Geom. 10F. Arici, J. Kaad, G. Landi, Pimsner algebras and Gysin sequences from prin- cipal circle actions, J. Noncommut. Geom., 10, 29-64, (2016). Twisted homogeneous coordinate rings. M Artin, M Van Den, Bergh, J. Algebra. 133M. Artin, M. van den Bergh, Twisted homogeneous coordinate rings, J. Al- gebra, 133, 249-271, (1990). Noncommutative complex differential geometry. E Beggs, S P Smith, J. Geom. Phys. 72E. Beggs, S. P. Smith, Noncommutative complex differential geometry, J. Geom. Phys., 72, 7-33, (2013). An extension of the Borel-Weil construction to the quantum group U q (n). L C Biedenharn, M A Lohe, Comm. Math. Phys. 146L.C. Biedenharn, M. A. Lohe, An extension of the Borel-Weil construction to the quantum group U q (n), Comm. Math. Phys., 146, 483-504, (1992). 
Homogeneous vector bundles. R Bott, Ann. of Math. 66R. Bott, Homogeneous vector bundles, Ann. of Math., 66, 203-248, (1957). Quantum teardrops. T Brzeziński, S A Fairfax, Comm. Math. Phys. 316T. Brzeziński, S. A. Fairfax, Quantum teardrops, Comm. Math. Phys., 316, 151-170, (2012). T Brzeziński, Galois Structures. T. Brzeziński, Galois Structures, https://www.impan.pl/swiat-matematyki/ notatki-z-wyklado~/brzezinski_gs.pdf, (2008). Quantum group gauge theory on quantum spaces. T Brzeziński, S Majid, Comm. Math. Phys. 157T. Brzeziński, S. Majid, Quantum group gauge theory on quantum spaces, Comm. Math. Phys., 157, 591-638, (1993). Algebraic deformations of toric varieties I. General constructions. L Cirioa, G Landi, R Szabo, Adv Math. 246L. Cirioa, G. Landi, R. Szabo, Algebraic deformations of toric varieties I. General constructions, Adv Math., 246, 33-88, (2013). Dirac operators on quantum projective spaces. F Andrea, L Dabrowski, Comm. Math. Phys. 295F. D'Andrea, L. Dabrowski, Dirac operators on quantum projective spaces, Comm. Math. Phys., 295, 731-790, (2010). Dolbeault-Dirac spectral triples. B Das, R Buachalla, P Somberg, in preparationB. Das, R.Ó Buachalla, P. Somberg, Dolbeault-Dirac spectral triples, (in preparation). Quantized flag manifolds and irreduciblerepresentations. M S Dijkhuizen, J V Stokman, Comm. Math. Phys. 203M. S. Dijkhuizen, J. V. Stokman, Quantized flag manifolds and irreducible- representations, Comm. Math. Phys., 203, 297-324 (1999). . N Gonciulea, V Lakshmibai, Flag Varieties, HermannN. Gonciulea, V. Lakshmibai, Flag Varieties, Hermann, 2001. Commutation relations for arbitrary quantum minors. K R Goodearl, Pacific J. Math. 228K. R. Goodearl, Commutation relations for arbitrary quantum minors, Pacific J. Math., 228, 63-102, (2006). Geometry of quantum homogeneous vector bundles and representation theory of quantum groups. I. A R Gover, R B Zhang, Rev. Math. Phys. 11A. R. Gover, R. B. Zhang, Geometry of quantum homogeneous vector bundles and representation theory of quantum groups. I, Rev. Math. Phys., 11, 533-552, (1999). Quantum cluster algebra structures on quantum Grassmannians and their quantum Schubert cells: the finite-type cases. J Grabowski, S Launois, Int. Math. Res. Not. 10J. Grabowski, S. Launois, Quantum cluster algebra structures on quantum Grassmannians and their quantum Schubert cells: the finite-type cases, Int. Math. Res. Not., 10, 2230-2262, (2011). Graded quantum cluster algebras and an application to quantum Grassmannians. J Grabowski, S Launois, Proc. Lon. Math. Soc. 109J. Grabowski, S. Launois, Graded quantum cluster algebras and an application to quantum Grassmannians, Proc. Lon. Math. Soc., 109, 697-732, (2014). A note on first order differential calculus on quantum principal bundles. P Hajac, Czechoslovak J. Phys. 47P. Hajac, A note on first order differential calculus on quantum principal bundles, Czechoslovak J. Phys., 47, 1139-1143, (1997). Projective module description of the q-monopole. P Hajac, S Majid, Comm. Math. Phys. 206P. Hajac, S. Majid, Projective module description of the q-monopole, Comm. Math. Phys., 206, 246-264, (1999). The locally finite part of the dual coalgebra of quantised irreducible flag manifolds. I Heckenberger, S Kolb, Proc. Lon. Math. Soc. 89I. Heckenberger, S. Kolb, The locally finite part of the dual coalgebra of quantised irreducible flag manifolds, Proc. Lon. Math. Soc., 89, 457-484, (2004). De Rham complex for quantized irreducible flag manifolds. I Heckenberger, S Kolb, J. Algebra. 305I. Heckenberger, S. 
Kolb, De Rham complex for quantized irreducible flag manifolds, J. Algebra, 305, 704-741, (2006). Differential forms via the Bernstein-Gelfand-Gelfand resolution for quantized irreducible flag manifolds. I Heckenberger, S Kolb, J. Geom. Phys. 57I. Heckenberger, S. Kolb, Differential forms via the Bernstein-Gelfand- Gelfand resolution for quantized irreducible flag manifolds. J. Geom. Phys., 57, 2316-2344, (2007). Holomorphic structures on the quantum projective line. M Khalkhali, G Landi, W Van Suijlekom, Int. Math. Res. Not. 4M. Khalkhali, G. Landi, W. van Suijlekom, Holomorphic structures on the quantum projective line, Int. Math. Res. Not., 4, 851-884, (2010). The homogeneous coordinate ring of the quantum projective plane. M Khalkhali, A Moatadelro, J. Geom. Phys. 61M. Khalkhali, A. Moatadelro, The homogeneous coordinate ring of the quan- tum projective plane, J. Geom. Phys., 61, 276-289, (2011). Noncommutative complex geometry of the quantum projective space. M Khalkhali, A Moatadelro, J. Geom. Phys. 61M. Khalkhali, A. Moatadelro, Noncommutative complex geometry of the quantum projective space, J. Geom. Phys., 61, 2436-2452, (2011). A Klimyk, K Schmüdgen, Quantum Groups and their Representations. Springer-VerlagA. Klimyk, K. Schmüdgen, Quantum Groups and their Representations, Springer-Verlag, 1997. Sur certaines structures fibrées complexes. J.-L Koszul, B Malgrange, Arch. Math. 9J.-L. Koszul, B. Malgrange, Sur certaines structures fibrées complexes, Arch. Math, 9, 102-109, (1958). Quantum deformations of flag and Schubert schemes. V Lakshmibai, N Reshetikhin, C. R. Acad. Sci. Paris. 313V. Lakshmibai, N. Reshetikhin, Quantum deformations of flag and Schubert schemes, C. R. Acad. Sci. Paris, 313, 121-126, (1991). Noncommutative Riemannian and spin geometry of the standard q-sphere. S Majid, Comm. Math. Phys. 256S. Majid, Noncommutative Riemannian and spin geometry of the standard q-sphere, Comm. Math. Phys., 256, 255-285, (2005). Projective quantum spaces. U Meyer, Lett. Math. Phys. 35U. Meyer, Projective quantum spaces, Lett. Math. Phys., 35, 91-97, (1995). Quantum homogeneous spaces with faithfully flat module structures. E F Müller, H.-J Schneider, Israel J. Math. 111E. F. Müller, H.-J. Schneider, Quantum homogeneous spaces with faithfully flat module structures, Israel J. Math., 111, 157-190, (1999). Finite dimensional representations of the quantum group GL q (n; C) and the zonal spherical functions on U q (n − 1)/U q (n). K Mimachi, M Noumi, H Yamada, Japan. J. Math. 19K. Mimachi, M. Noumi, H. Yamada, Finite dimensional representations of the quantum group GL q (n; C) and the zonal spherical functions on U q (n − 1)/U q (n), Japan. J. Math., 19, 31-80, (1993). Zonal spherical functions on the quantum homogeneous space SU q (n + 1)/SU q (n). K Mimachi, M Noumi, H Yamada, Proc. Japan Acad. Ser. A Math. Sci. 6K. Mimachi, M. Noumi, H. Yamada, Zonal spherical functions on the quantum homogeneous space SU q (n + 1)/SU q (n). Proc. Japan Acad. Ser. A Math. Sci., 6, 169-171, (1989). Quantum bundle description of quantum projective spaces. R Buachalla, Comm. Math. Phys. 316R.Ó Buachalla, Quantum bundle description of quantum projective spaces, Comm. Math. Phys., 316, 345-373, (2012). Noncommutative complex structures on quantum homogeneous spaces. R Buachalla, J. Geom. Phys. 99R.Ó Buachalla, Noncommutative complex structures on quantum homogeneous spaces, J. Geom. Phys., 99, 154-173, (2016). R Buachalla, arXiv:1602.08484Noncommutative Kähler structures on quantum homogeneous spaces. 
R.Ó Buachalla, Noncommutative Kähler structures on quantum homogeneous spaces, arXiv:1602.08484, (2016). R Buachalla, arXiv:1701.04394Nichols algebras and quantum principal bundles. R.Ó Buachalla, Nichols algebras and quantum principal bundles, arXiv:1701.04394, (2016). The noncommutative Kähler geometry of the full quantum flag manifold of C q. R Buachalla, P Somberg, SU 3 ], (in preparationR.Ó Buachalla, P. Somberg, The noncommutative Kähler geometry of the full quantum flag manifold of C q [SU 3 ], (in preparation). Coherent and quasicoherent sheaves for quantum projective space. R Buachalla, J Št&apos;ovíček, A Van Roosmallen, in preparationR.Ó Buachalla, J.Št'ovíček, A. van Roosmallen, Coherent and quasi- coherent sheaves for quantum projective space, (in preparation). Noncommutative torus bundles and generalised crossed products. R Buachalla, K Strung, in preparationR.Ó Buachalla, K. Strung, Noncommutative torus bundles and generalised crossed products, (in preparation). Quantum Linear Groups. B Parshall, J.-P Wang, AMS Publishing HouseB. Parshall, J.-P. Wang, Quantum Linear Groups, AMS Publishing House, 1991. J.-P Serre, Représentations linéaires et espaces homogénes kählériens des groupes de Lie compacts (d'aprés Armand Borel et André Weil), Séminaire Bourbaki. Paris: Soc. Math. France100J.-P. Serre, Représentations linéaires et espaces homogénes kählériens des groupes de Lie compacts (d'aprés Armand Borel et André Weil), Séminaire Bour- baki, Paris: Soc. Math. France, 100, 447-454, (1954). Soǐbel'man, On quantum flag manifolds. Y S , Func. Ana. Appl. 25Y. S. Soǐbel'man, On quantum flag manifolds, Func. Ana. Appl., 25, 225-227, (1992). Algebra of functions on quantum SU (n + 1) group and odd dimensional quantum spheres. Y S Soǐbel&apos;man, L L Vaksman, Alg. Anal. 2Y. S. Soǐbel'man, L. L. Vaksman, Algebra of functions on quantum SU (n + 1) group and odd dimensional quantum spheres, Alg. Anal., 2, 101-120, (1990). The quantum orbit method for generalised flag manifolds. J V Stokman, Math. Res. Lett. 10J. V. Stokman, The quantum orbit method for generalised flag manifolds, Math. Res. Lett., 10, 469-481, (2003). Relative Hopf modules -equivalences and freeness conditions. M Takeuchi, J. Algebra. 60M. Takeuchi, Relative Hopf modules -equivalences and freeness conditions, J. Algebra, 60, 452-471, (1979). Problems in the theory of quantum groups. S Wang, Banach Cen. Publ. 40S. Wang, Problems in the theory of quantum groups, Banach Cen. Publ., 40, 67-78, (1997). Differential calculus on compact matrix pseudogroups (quantum groups). S L Woronowicz, Comm. Math. Phys. 122S. L. Woronowicz, Differential calculus on compact matrix pseudogroups (quan- tum groups), Comm. Math. Phys., 122, 125 -170, (1989). Polskiej Akademii Nauk, ul.Śniadeckich 8, Warszawa 00-656, Poland e-mail: cmrozinski@impan. Instytut Matematyczny, pl e-mail: [email protected] Matematyczny, Polskiej Akademii Nauk, ul.Śniadeckich 8, Warszawa 00-656, Poland e-mail: [email protected] e-mail: [email protected]
[]
[ "Reflexivity of a Banach Space with a Countable Vector Space Basis", "Reflexivity of a Banach Space with a Countable Vector Space Basis" ]
[ "Michael Oser Rabin ", "Duggirala Ravi [email protected] " ]
[]
[]
Almost all the function spaces over real or complex domains and spaces of sequences that arise in practice as examples of normed complete linear spaces (Banach spaces) are reflexive. These Banach spaces are dual to their respective spaces of continuous linear functionals over the corresponding Banach spaces. For each of these Banach spaces, a countable vector space basis exists, which is responsible for their reflexivity. In this paper, a specific criterion for reflexivity of a Banach space with a countable vector space basis is presented. Almost all the function spaces over real or complex domains and spaces of sequences that arise in practice as examples of normed complete linear spaces (Banach spaces) are reflexive. The topological property of being reflexive is that these Banach spaces are dual to their respective spaces of continuous linear functionals over the corresponding Banach spaces. For a reflexive Banach space, the double dual space is isometrically isomorphic to the Banach space, and they are essentially the same spaces. For each of these Banach spaces, a countable vector space basis exists, which is responsible for their reflexivity. A vector space basis for a Banach space can be easily formulated, taking the analogy from a Riesz basis for a Hilbert space. However, these Banach spaces are not plainly L p -function or ℓ p -sequence spaces. For a Banach space with a countable vector space basis, the projection maps onto finite dimensional component subspaces are continuous and surjective, and hence open maps. The coefficient sequence of any vector in the Banach space, with respect to a basis for which the dual basis functionals are normalized, can be shown to be bounded with respect to the sup-norm, and the linear transformation mapping a vector to its coefficient sequence becomes continuous. The dual basis linear functionals become continuous and form a basis for the dual space. A specific criterion for reflexivity of a Banach space with a countable vector space basis is presented.
null
[ "https://arxiv.org/pdf/2202.12931v1.pdf" ]
247,158,257
2202.12931
5e9d8ca6d7707beb328e31c367ccdc9c1eb0fc8b
Reflexivity of a Banach Space with a Countable Vector Space Basis 28 Feb 2022 Michael Oser Rabin Duggirala Ravi [email protected] Reflexivity of a Banach Space with a Countable Vector Space Basis 28 Feb 2022Normed Linear SpacesBanach SpacesVector Space BasisWeak Topology All most all the function spaces over real or complex domains and spaces of sequences, that arise in practice as examples of normed complete linear spaces (Banach spaces), are reflexive. These Banach spaces are dual to their respective spaces of continuous linear functionals over the corresponding Banach spaces. For each of these Banach spaces, a countable vector space basis exists, which is responsible for their reflexivity. In this paper, a specific criterion for reflexivity of a Banach space with a countable vector space basis is presented.All most all the function spaces over real or complex domains and spaces of sequences, that arise in practice as examples of normed complete linear spaces (Banach spaces), are reflexive. The topological property of being reflexive is that these Banach spaces are dual to their respective spaces of continuous linear functionals over the corresponding Banach spaces. For a reflexive Banach space, the double dual space is isometrically isomorphic to the Banach space, and they are essentially the same spaces. For each of these Banach spaces, a countable vector space basis exists, which is responsible for their reflexivity. A vector space basis for a Banach space can be easily formulated, taking the analogy from a Riesz basis for a Hilbert space. However, these Banach spaces are not plainly L p -function or ℓ psequence spaces. For a Banach space with a countable vector space basis, the projection maps onto finite dimensional component subspaces are continuous and surjective, and hence open maps. The coefficient sequence of any vector in the Banach space, with respect to a basis, for which the dual basis functionals are normalized, can be shown to be bounded with respect to sup-norm, and the linear transformation mapping a vector to its coefficient sequence becomes continuous. The dual basis linear functionals become continuous and form a basis for the dual space. A specific criterion for reflexivity of a Banach space with a countable vector space basis is presented. 3. If at least one of the index sets I and J is finite, the projection operators P I and P J , defined by P I (x) = y and P J (x) = z, for x ∈ B, with y ∈ L(I) and z ∈ L(J), such that x = y + z, are both well defined and continuous. Proof. If I = ∅ or J = ∅, the statements vacuously hold. For a singleton set {i}, for some i ∈ N, L({i}) = {cξ i : c ∈ F}, and cξ i B = |c| · ξ i B , whence L({i}) is topologically closed in B with respect to · B . If x = j∈N c j ξ j ∈ B, for some scalars c j ∈ N, then y = c i ξ i ∈ L({i}), z = j∈N \ {i} c j ξ j ∈ L(N \ {i}) , and the expression x = y + z is uniquely determined. Therefore, L(N \ {i}) is a closed linear subspace of B, with respect to · B , as well. Now L(I) = j∈J L(N \ {j}) and L(J) = i∈I L(N \ {i}), hence both are closed. The remaining part is obvious. For a Banach space B, with a countable basis {ξ i : i ∈ N} ⊂ B, the algebraic dual basis is the set of linear functionals {ξ i : i ∈ N}, defined by their action on the basis vectors by the condition thatξ i (ξ j ) = δ i, j , for i, j ∈ N. Proposition 1 (item 3) shows thatξ i is continuous, hence bounded, i.e., 1 ≤ ξ i B < ∞, for every i ∈ N. Let η i = ξ i B ξ i andη i =ξ i ξ i B , for i ∈ N. 
Then, η i B = 1 andη i (η j ) = δ i,x = i∈N c i η i , for some c i ∈ F, where i ∈ N. Then, sup i∈N |c i | ≤ x B . Proof. LetT be the formal linear mapping defined byT (d) = i∈N d iξi , for d = (d 1 , d 2 , d 3 , . . .) ∈ ℓ 1 (N, F).inB. For x = i∈N c i η i , for some scalars c i ∈ F, i ∈ N, the natural embedding of x inB isx = i∈N c iηi , which is an isometric isomorphism, by Hahn-Banach theorem, i.e., x B = x B . If x B ≤ 1, then |x T (e i ) | = |x(η i )| = |c i | ≤ 1, for every i ∈ N. By the linearity of of the natural embedding, the contention follows. Reflexivity of a Countable Basis Banach Space Let B be a Banach space with a countable basis {ξ i : i ∈ N}, andB be the dual space of with B generated by the dual basis {ξ i : i ∈ N}. In this subsection, three specific assumptions regarding structure of B andB are assumed to hold good, for the theorem of this section. Assumption 1. For every x ∈ B, with x = i∈N c i ξ i , for some scalars c i ∈ F, if y n = n i=1 c i ξ i , then y n B ≤ x B , for every n ∈ N. The assumption just stated can hold, for a very large collection of Banach spaces. It is particularly a nondecreasing norm, with addition of more terms : y n B ≤ y n+1 B , for each n ∈ N. In general, for arbitrary scalars c i ∈ F, with |c i | bounded and y n = n j=1 c j ξ j , where i, n ∈ N, it may not be true that { y n B : n ∈ N} is a bounded set of nonnegative real numbers, even though y n B is nondecreasing, for n ∈ N. Assumption 2. For any scalars c i ∈ F, where i ∈ N, with y n = n i=1 c i ξ i , if y n B ≤ b, for some fixed b > 0 and every n ∈ N, then i∈N c i ξ i ∈ B, and if x = i∈N c i ξ i ∈ B, then x B ≤ b. The assumption just stated can hold, for a very large collection of Banach spaces. By the embedding of y n = n i=1 c i ξ i ∈ B into the double dualŷ n = n i=1 c iξi ∈B, followed by an appeal to the compactness of the closed convex unit sphere ofB, centered at the origin, with respect to weak * topology ofB, if y n B ≤ b, then the sequenceŷ n = n i=1 c iξi converges toẑ = i∈N c iξi with ẑ B ≤ b. The assumption implies that there exists x ∈ B, such that the natural embedding of x intoB isẑ. Assumption 3. For everyf ∈B, withf = i∈N d iξi , for some scalars d i ∈ F, it holds that for n ∈ N. If the assumption just stated holds, thenB is the topological closure ofM, with respect to · B . The assumption is satisfied, if the coefficient sequence is in ℓ p (N, F), for some finite p ≥ 1, possibly depending onf , i.e., 1 ≤ p < ∞. The following is the main result. , centered at the origin. lim n→∞ ∞ i=n+1 d iξi B = 0(1) The product topology on S(1) is generated by the basic open sets    i∈N c i ξ i ∈ S(1) : c i ∈ U i , i ∈ N    where U i is an open subset of V i , such that U i = V i ,Γ n =      (c 1 , ... , c n ) ∈ F n : i∈N c i ξ i B ≤ 1 , for some scalars c i ∈ F and i ≥ n + 1      for n ∈ N, is a closed subset of n i=1 V i . Now, the set A n = Γ n × ∞ i=n+1 V i is a closed subset of i∈N V i , with respect to the product topology, for every n ∈ N. If i∈N c i ξ i B ≤ 1, then n i=1 c i ξ i B ≤ 1, by Assumption 1, and therefore, T S(1) ⊆ A n , for every n ∈ N. Conversely, if (c 1 , ... , c n , c n+1 , . . .) ∈ A n , for every n ∈ N, then n i=1 c i ξ i B ≤ 1, for every n ∈ N, by Assumption 1, and i∈N c i ξ i ∈ B, with i∈N c i ξ i B ≤ 1, by Assumption 2. Thus, n∈N A n = T S (1) and T S(1) is a closed subset of i∈N V i , hence compact with respect to the product topology. 
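Before turning to the second part of the proof, it may be useful to keep a concrete model in mind: for B = ℓ^p with 1 < p < ∞ and its standard unit-vector basis, Assumptions 1-3 hold (partial-sum norms are nondecreasing and bounded by the norm of x, uniformly bounded partial sums force membership in ℓ^p with the same bound, and tails of dual-basis expansions vanish in ℓ^q), while for p = 1 the dual-basis functionals no longer span the full dual ℓ^∞, so the standing hypotheses break down, consistent with ℓ^1 not being reflexive. The snippet below checks the first of these claims, together with the coefficient bound of Proposition 2.2, numerically; the sample sequence and tolerances are arbitrary choices of ours, and this is an illustration, not part of the argument.

```python
# Numerical illustration (not a proof) that the standard basis of l^p,
# 1 < p < infinity, fits the hypotheses above: the partial-sum norms ||y_n|| are
# nondecreasing and bounded by ||x||, and sup_i |c_i| <= ||x|| as in Proposition 2.2.
import numpy as np

p = 1.5
c = np.array([(-1.0) ** i / (i + 1.0) for i in range(200)])      # coefficients of x
norm_x = np.sum(np.abs(c) ** p) ** (1.0 / p)                     # ||x||_p

partials = [np.sum(np.abs(c[: n + 1]) ** p) ** (1.0 / p) for n in range(len(c))]

# Assumption 1: ||y_n|| is nondecreasing in n and never exceeds ||x||.
assert all(a <= b + 1e-12 for a, b in zip(partials, partials[1:]))
assert all(pn <= norm_x + 1e-12 for pn in partials)

# Coefficient bound of Proposition 2.2: sup_i |c_i| <= ||x||.
assert np.max(np.abs(c)) <= norm_x + 1e-12

# Assumption 2 in this model: the partial sums are bounded by b = ||x||, and the
# full series does converge in l^p to x with norm at most b.
print("||x||_p =", norm_x, "   sup_n ||y_n||_p =", partials[-1])
```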
The weak topology on B is generated by subbasic open sets V(f , z, ρ), wheref ∈B, z ∈ B and ρ > 0, defined by the condition z, ρ). V(f , z, ρ) = y ∈ B : |f (y) −f (z)| < ρ For any x ∈ V(f , z, ρ), with δ = δ(f , z, ρ) = ρ−|f (z)−f (x)|, by triangle inequality, |f (y)−f (z)| ≤ |f (y)−f (x)| + |f (x) −f (z)|, if y ∈ V(f , x, δ), then |f (y) −f (z)| ≤ δ + |f (x) −f (z)| < ρ, and y ∈ V(f , z, ρ), hence V(f , x, δ) ⊆ V(f , Now, for the basic open set V(f , x, δ), let x = i∈N c i ξ i andf = i∈N d i ξ i , for some scalars c i , d i ∈ F, for i ∈ N. Sincef (x) = i∈N c i d i , it follows that there exists N (f , δ) ∈ N, such that ∞ i=n+1 d iξi B < δ 4 , for every n ≥ N (f , δ). Letĝ = n i=1 d iξi , for some fixed n ≥ N (f , δ). If |ĝ(y) −ĝ(x)| < δ 4 , then |f (y) −f (x)| ≤ |f (y) −ĝ(y)| + |ĝ(y) −ĝ(x)| + |ĝ(x) −f (x)| < δ 4 x B + y B + δ 4 < δ, whenever x, y ∈ S(1). Thus, V(ĝ, x, δ 4 ) S(1) ⊆ V(f , z, ρ) S(1), for x ∈ V(f , z, ρ) S(1), with δ = δ(f , z, ρ) = ρ − |f (z) −f (x)|, and some appropriately chosenĝ ∈M. In the product topology, all the functionals inM remain continuous, and the product topology on S(1) coincides with the weak topology on S(1). Thus, S(1) is compact in the weak topology. Observation The compactness of the set i∈N V i can be proved without using the axiom of choice. It is also possible to take B to be the dual of another Banach space N , and apply the criterion to infer the reflexivity of N . Miscellany and Applications V(f , x, ǫ) = y ∈ B : |f (y) −f (x)| < ǫ . Let x = i∈N c i ξ i ∈ B, for some scalars c i ∈ F, for i ∈ N. Let y n = n i=1 c i ξ i , so that y n ∈ L({1, ..., n}), for n ∈ N. For everyf = i∈N d iξi , for any scalars d i ∈ F, for i ∈ N, and ǫ > 0, there exists N (f , x, ǫ) ∈ N, Proof. M Q is strongly dense in M. such that |f (y n ) −f (x)| = | i≥n+1 c i d i | < ǫ, For a property that depends continuously with respect to the weak topology, in order to show that the property holds for all of B, it suffices to show that the same holds for M. Conclusions The projection maps and dual basis linear functionals a Banach space with a countable vector space basis are shown to be continuous open maps. The dual basis linear functionals form a basis for the dual spacethe dual space, and the double dual basis linear functionals form a basis for the double dual space. A specific criterion for reflexivity is presented. · B . The double dual, or bidual, is denoted byB, with its bidual norm · B . Proposition 2.1. Let B be a Banach space, with norm · B , and let {ξ i : i ∈ N} ⊂ B be a basis for B. Let I ⊂ N and J = N \ I, and let L(I) and L(J) be the subspaces spanned by {ξ i : i ∈ I} and {ξ j : j ∈ J}, respectively. The following statements hold: 1. Both L(I) and L(J) are closed linear subspaces of B, with respect to · B . 2. If at least one of the index sets I and J is finite, then B = L(I) ⊕ L(J), i.e., every vector x ∈ B can be expressed as the sum of vectors x = y + z, for some y ∈ L(I) and z ∈ L(J), uniquely. For m, n ∈ N, and g n = n i=1 d iξi , the estimate g m+n − g n B i |, shows that {g i : i ∈ N} is a Cauchy sequence inB, andT (d) ∈B, with the norm ofT , as a linear transformation from ℓ 1 (N, F) intoB, being at most 1. LetT :B → ℓ ∞ (N, F) be the adjoint linear transformation ofT . The linear transformation norm ofT coincides with that ofT , andT becomes continuous. Letη i be the natural embedding of η i LetM = n∈NL ({1, ..., n}), whereL({1, ..., n}) is the closed linear subspace generated by {ξ i : 1 ≤ i ≤ n}, Theorem 2.3. 
(Rabin) Let B be a Banach space with a countable basis {ξ i : i ∈ N}. If the three assumptions stated above hold, then B is reflexive, i.e., the double dualB is isometrically isomorphic to B. Proof. Let S(r) = {x ∈ B : x B < r}, for r > 0, be the open subset of B, of radius r, centered at the origin, and S(r) = {x ∈ B : x B ≤ r}, for r > 0, be the closed subset of B, of radius r, centered at the origin. Let V i = {d i ∈ F : |d i | < ξ i B } be the open set of F, of radius ξ i B, centered at the origin, and V i ={d i ∈ F : |d i | ≤ ξ i B} be the closed set of F, of radius ξ i B for all but possibly finitely many indexes i ∈ N.Let T : S(1) → i∈N V i be the on-to-one continuous function mapping x = i∈N c i ξ i to the coefficient sequence (c 1 , c 2 , . . .) ∈ i∈N V i . Then, the basic open set of S(1) as just defined is the set T −1 i∈N U i , where i∈N U i is an open subset of the countable infinite product i∈N V i . There are two parts in the proof: in the first part, T S(1) is shown to be closed in i∈N V i (hence S(1) becomes compact with respect to the product topology), and in the second part, the weak topology of S(1) is shown to coincide with the product topology of S(1). Part 1. Since the projections onto finite dimensional subspaces are continuous and also open maps, the set A vector space basis is very important for a vector space. For an infinite dimensional topological vector space, a precise formulation of a vector space basis can be expressed, extending the definition of linear independence for linear combinations of finitely many vectors to series. For a Banach space with a countable vector space basis, some more interesting and useful facts can be proved.Theorem 3.1. (Weak Denseness of Finite Linear Combinations) Let B be a Banach space with a countable basis {ξ i : i ∈ N}, and M = n∈N L({1, ..., n}), where L({1, ..., n}) is the closed linear subspace generated by {ξ i : 1 ≤ i ≤ n}, for n ∈ N. Then, M is weakly dense in B. Proof. The weak topology on B is generated by subbasic open sets V(f , x, ǫ), wheref ∈B, x ∈ B and ǫ > 0, where for every n ≥ N (f , x, ǫ). Now, for a basic open subset V(f 1 , x, ǫ 1 ) ∩ ... ∩ V(f m , x, ǫ m ), for some m ∈ N, let δ = min{ǫ i : 1 ≤ i ≤ m} and ν = max{N (f i , x, δ) : 1 ≤ i ≤ m}. Then, |f i (y n ) −f i (x)| < δ, for 1 ≤ i ≤ m, for every n ≥ ν. Thus, every weak basic open set centered at x intersects M, and x is a limit point of M with respect to the weak topology. Theorem 3.2. (Weak Denseness of a Countable Set) Let B be a Banach space with a countable basis {ξ i : i ∈ N}, and M Q = n∈N L Q ({1, ..., n}), where L Q ({1, ..., n}) is the countable subset obtained by collecting linear combinations of vectors in {ξ i : 1 ≤ i ≤ n}, with rational number coordinates, for n ∈ N. Then, M Q is weakly dense in B. Proof. M Q is strongly dense in M, where M is as defined in Theorem 3.1 and weakly dense in B. Corollary 3 .2. 1 . 31Let B, M and M Q be as in Theorems 3.1 and 3.2. For any continuous linear functionalĝ defined on M Q , such that |ĝ(x)| ≤ x B , for x ∈ M Q , there is a unique continuous extensionĥ ofĝ to M, andĥ satisfies |ĥ(x)| ≤ x B , for every x ∈ M. j , for i, j ∈ N. Moreover, {η i : i ∈ N} forms a basis forB : iff ∈B, then, by the algebraic action off on the vector space basis {η i : i ∈ N} for B, it holds thatf = i∈Nf (η i )η i , and the uniqueness of the algebraic expression is obvious.Proposition 2.2. 
Let B be a Banach space, with a countable basis, {η i : i ∈ N}, such that {η i : i ∈ N} is a normalized basis forB, i.e., η i B = 1 and {η i : i ∈ N} forms a basis forB, and let
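The truncation estimate used in the weak-denseness argument (Theorem 3.1 and the proof of Theorem 2.3 above) is easy to check numerically. The sketch below is a minimal illustration added here, not part of the original text, and it assumes the concrete case B = ℓ¹ with its standard basis, so that a vector is a summable coefficient sequence (c_i) and a bounded functional f̂ is a bounded sequence (d_i) acting by f̂(x) = Σ_i c_i d_i; the finite truncations y_n then satisfy |f̂(y_n) − f̂(x)| = |Σ_{i>n} c_i d_i| → 0.

```python
import numpy as np

# Minimal numerical illustration (assumption: B = l^1 with its standard basis,
# so a vector is a summable coefficient sequence and a bounded functional f_hat
# acts as f_hat(x) = sum_i c_i * d_i with (d_i) a bounded sequence).
rng = np.random.default_rng(0)
N = 10_000                                                  # truncation of the infinite sequences
c = rng.standard_normal(N) / (np.arange(1, N + 1) ** 2)     # summable coefficients: x in l^1
d = rng.uniform(-1.0, 1.0, N)                               # bounded coefficients: f_hat in l^infty

f_x = np.dot(c, d)                                          # f_hat(x)
for n in (10, 100, 1000, 5000):
    f_yn = np.dot(c[:n], d[:n])                             # f_hat(y_n), y_n the n-term truncation of x
    # |f_hat(y_n) - f_hat(x)| = |sum_{i>n} c_i d_i| <= sup|d_i| * sum_{i>n} |c_i| -> 0
    tail_bound = np.max(np.abs(d)) * np.sum(np.abs(c[n:]))
    print(n, abs(f_yn - f_x), tail_bound)
```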
[]
[ "Nonequilibrium Steady State in Open Quantum Systems: Influence Action, Stochastic Equation and Power Balance", "Nonequilibrium Steady State in Open Quantum Systems: Influence Action, Stochastic Equation and Power Balance" ]
[ "J.-T Hsiang ", "B L Hu [email protected] ", "\nDepartment of Physics\nCenter for Theoretical Physics\nNational Dong Hwa University\nTaiwan\n", "\nJoint Quantum Institute and Maryland Center for Fundamental Physics\nFudan University\nShanghaiChina\n", "\nUniversity of Maryland\n20742College ParkMarylandUSA\n" ]
[ "Department of Physics\nCenter for Theoretical Physics\nNational Dong Hwa University\nTaiwan", "Joint Quantum Institute and Maryland Center for Fundamental Physics\nFudan University\nShanghaiChina", "University of Maryland\n20742College ParkMarylandUSA" ]
[]
The existence and uniqueness of a steady state for nonequilibrium systems (NESS) is a fundamental subject and a main theme of research in statistical mechanics for decades. For Gaussian systems, such as a chain of harmonic oscillators connected at each end to a heat bath, and for anharmonic oscillators under specified conditions, definitive answers exist in the form of proven theorems. Answering this question for quantum many-body systems poses a challenge for the present. In this work we address this issue by deriving the stochastic equations for the reduced system with self-consistent backaction from the two baths, calculating the energy flow from one bath to the chain to the other bath, and exhibiting a power balance relation in the total (chain + baths) system which testifies to the existence of a NESS in this system at late times. Its insensitivity to the initial conditions of the chain corroborates its uniqueness. The functional method we adopt here entails the use of the influence functional, the coarse-grained and stochastic effective actions, from which one can derive the stochastic equations and calculate the average values of physical variables in open quantum systems. This involves both taking the expectation values of quantum operators of the system and the distributional averages of stochastic variables stemming from the coarse-grained environment. This method, though formal in appearance, is compact and complete. It can also easily accommodate perturbative techniques and diagrammatic methods from field theory. Taken all together, it provides a solid platform for carrying out systematic investigations into the nonequilibrium dynamics of open quantum systems and quantum thermodynamics.
10.1016/j.aop.2015.07.009
[ "https://arxiv.org/pdf/1405.7642v2.pdf" ]
118,456,379
1405.7642
4c9ea713adab5e888e3c3ce9c65170639f570180
Nonequilibrium Steady State in Open Quantum Systems: Influence Action, Stochastic Equation and Power Balance J.-T Hsiang B L Hu [email protected] Department of Physics Center for Theoretical Physics National Dong Hwa University Taiwan Joint Quantum Institute and Maryland Center for Fundamental Physics Fudan University ShanghaiChina University of Maryland 20742College ParkMarylandUSA Nonequilibrium Steady State in Open Quantum Systems: Influence Action, Stochastic Equation and Power Balance Preprint submitted to Annals of Physics May 28, 2014Nonequilibrium steady stateOpen quantum systemsInfluence functional formalismstochastic density matrixLangevin equationnoise and fluctuationsEnergy flow and power balance relationsQuantum transportquantum thermodynamics The existence and uniqueness of a steady state for nonequilibrium systems (NESS) is a fundamental subject and a main theme of research in statistical mechanics for decades. For Gaussian systems, such as a chain of harmonic oscillators connected at each end to a heat bath, and for anharmonic oscillators under specified conditions, definitive answers exist in the form of proven theorems. Answering this question for quantum many-body systems poses a challenge for the present. In this work we address this issue by deriving the stochastic equations for the reduced system with self-consistent backaction from the two baths, calculating the energy flow from one bath to the chain to the other bath, and exhibiting a power balance relation in the total (chain + baths) system which testifies to the existence of a NESS in this system at late times. Its insensitivity to the initial conditions of the chain corroborates to its uniqueness.The functional method we adopt here entails the use of the influence functional, the coarse-grained and stochastic effective actions, from which one can derive the stochastic equations and calculate the average values of physical variables in open quantum systems. This involves both taking the expectation values of arXiv:1405.7642v2 [cond-mat.stat-mech] 1 Jun 2014 quantum operators of the system and the distributional averages of stochastic variables stemming from the coarse-grained environment. This method though formal in appearance is compact and complete. It can also easily accommodate perturbative techniques and diagrammatic methods from field theory. Taken all together it provides a solid platform for carrying out systematic investigations into the nonequilibrium dynamics of open quantum systems and quantum thermodynamics. Introduction Nonequilibrium stationary states (NESS) play a uniquely important role in many-body systems in contact with two or more heat baths at different temperatures, similar in importance to the equilibrium state of a system in contact with one heat bath which is the arena for the conceptualization and utilization of the canonical ensemble in statistical thermodynamics. The statistical mechanics [1] and thermodynamics [2] of open systems 1 in NESS have been the focus of 1 Defined in a broader sense (A) an open system is one where some of its information is difficult or impossible to obtain or retrieve, or is coarse-grained away by design or by necessity, both in theoretical and practical terms, the latter referring to the limited capability of the measuring agent or the precision level of instrumentation. 
The more specific sense (B) used in nonequilibrium statistical mechanics [3] emphasizing the influence of a system's environment on its dynamics goes as follows: Start with a closed system comprising of two subsystems S 1 and S 2 with some interaction between the two, one can express the dynamics of S 1 including that of S 2 in terms of an integral differential equation. If one subsystem S 2 contains an overwhelmingly large number of degrees of freedom than the other, we call S 2 an environment E of S 1 . The influence of E on S 1 is called the backaction. When the environment investigation into the important features of nonequilibrium processes of both theoretical interests, such as providing the context for the celebrated classical and quantum fluctuation theorems, and acting as the fountainhead of a new field known as quantum thermodynamics [7,8], and a wide range of practical applications, extending from physics and chemistry to biology. For classical many body systems the existence and uniqueness of NESS is a fundamental subject and a main theme of research by mathematical physicists in statistical mechanics for decades. For Gaussian systems (such as a chain of harmonic oscillators with two heat baths at the two ends of the chain) [9] and anharmonic oscillators under general conditions [10] there are definitive answers in the form of proven theorems. Answering this question for quantum many body systems is not so straightforward and poses a major challenge for the present. For quantum many body systems a new direction of research is asking whether closed quantum systems can come to equilibrium and thermalize [11]. Equilibration of open quantum systems [12] with strong coupling to a heat bath also shows interesting new features [13]. Transport phenomena in open spin systems has also seen a spur of recent activities [14]. Noteworthy in the mathematical properties is the role played by symmetry in the nonequilibrium dynamics of these systems [15]. Issues Our current research program on the nonequilibrium dynamics of quantum open systems attempts to address four sets of issues with shared common basis pertaining to NESS: can be characterized by thermodynamic parameters it is called a heat bath at temperature T or a matter reservoir with chemical potential µ, etc. When a great deal of microscopic information of S 2 is discarded or coarse-grained, as is the case when it is described only by a few macroscopic parameters, the effect of the environment can be characterized by noise and fluctuations [4], and their backaction on the system show up in the "reduced" system's dynamics as dissipation [5], diffusion (quantum diffusion is responsible for the decoherence [6] of quantum phase information). An open system thus carries the influence or the backaction of its environment. Oftentimes the main task in the treatment of open systems is to find the influence of the environment on the subsystem. Defined in this sense (B) it is synonymous with "reduced" system -with the burden of explanation now shifted to what "reduced" entails operationally. A. The approach to NESS. Instead of seeking mathematical proofs for these basic issues which are of great importance but not easy to come by it is helpful to see how these systems evolve in time and find out under what conditions one or more NESS may exist. 
For this we seek to derive the quantum stochastic equations (master, Langevin, Fokker-Planck) for prototypical quantum open systems (e.g., for two oscillators in contact with two heat baths and extension to chains and networks) so one can follow their dynamics explicitly, to examine whether NESS exist at late times, by checking if energy fluxes reach a steady state and whether under these conditions a energy flow (power) balance relation exist. This is probably the most explicit demonstration of the NESS possible. In addition, the stochastic equations can be used to calculate the evolution of key thermodynamic and quantum quantities such as entropy for equilibration / thermalization considerations and quantum entanglement for quantum information inquires. B. Quantum transport: Since the seminal paper of [16], the role of nonlinearity and nonintegrability in the violation of Fourier law [17] has been explored in a wide variety of representative classical systems with different nonlinear interactions, such as the Fermi-Pasta-Ulam (FPU) models [18] or the Frenkel-Kontorova (FK) model [19] and baths of different natures [17,20,21]. For the original papers and current status we refer to two nice reviews [20,21]. For applications of heat conduction to phononics, see [22]. (Note also the recent work on anomalous heat diffusion [23]). Numerical results are a lot more difficult to come by for quantum many body systems, thus analytic results, even perturbative, for weak nonlinearity, are valuable. Finding solutions to the quantum stochastic equations have been attempted for simple systems like a quantum anharmonic oscillator chain coupled to two heat baths at the ends or harmonic oscillators coupled nonlinearly, each with its own heat bath (namely, with or without pinning potentials). The related problem of equilibration of open quantum systems with nonlinearity remains an open issue. Even at the classical level this is not a straightforward issue. The existence of breather modes [24] and 'strange' behavior [25] have been noted. Nonlinearity in quantum system adds a new dimension bearing some similarity or maybe sharing same origins with the issue of how to decipher scars of classical chaos in corresponding quantum systems. C. Fluctuation Relations: Entropy Production in nonequilibrium system and the role of large deviations in currents; Fluctuation Theorems both in the Gallavoti-Cohen vein [26,27,28,29,30] and the Jarzynski-Crook relations [31,32,33,34]. Much work in this field is formulated in the context of nonequilibrium thermodynamics. The use of microphysics models such as quantum Brownian motion and open quantum systems techniques, including even decoherence history concepts (for the definition of trajectories), such as used in [35] (see references therein) can provide some new perspective into these powerful relations. D. Quantum entanglement at finite temperature [36,37] It is generally believed that at high temperatures thermal fluctuations will overshadow quantum entanglement. This problem was explored by Audenaert et al [38] who work out exact solutions for a bisected closed harmonic chain at ground and thermal states, by Anders [39] for a harmonic lattice in 1-3 dimensions and derived a critical temperature above which the quantum system becomes separable. Anders and Winters [40] further provided proof of theorems and a phase diagram on this issue. Entanglement of a two particle Gaussian state interacting with a single heat bath is investigated recently in [41]. 
What makes this issue interesting is the suggestion [42] that quantum entanglement can persist at high temperature in NESS. Recently [43] showed by a coupled oscillator model that no thermal entanglement is found in the high temperature limit. However, the existence of quantum entanglement in NESS for driven systems as claimed by the experiment of Galve et al [44], and the calculations for spin systems [45], remains an open issue. We want to settle this issue theoretically, at least for harmonic oscillator systems, with the help of the formalism set up here. Our first batch of papers will focus on Issues A and B, which we describe below. A parallel batch will address Issues C and D in later expositions. Models and Methodology The generic quantum open system we study is a simple 1-dimensional quantum oscillator chain, with the two end-oscillators interacting with its own heat bath, each described by a scalar field. The two baths combined make up the environment. We begin our analysis with two oscillators linearly coupled and explore whether a NESS exists for this open system at late times. We do this by solving for the stochastic effective action and the Langevin equations, which is possible for a Gaussian system. From this we can derive the expressions for the energy flow from one bath to another through the system. This is the reason why we begin our study with this model, since in addition to its generic character and versatility, it provides a nice platform for explaining the methodology we adopt. For the sake of clarity we will work out everything explicitly, so as to facilitate easier comparison with other approaches. We name two papers which are closest to ours, either in the model used or in the concerns expressed: the paper by Dhar, Saito and Hanggi [46] uses the reduced density matrix approach to treat quantum transport, while that of Ghsquiere, Sinayskiy and Petruccione [43] uses master equations to treat entropy and entanglement dynamics. Similar in spirit is an earlier paper by Chen, Lebowitz and Liverani [47] which use the Keldysh techniques in a path integral formalism to consider the dissipative dynamics of an anharmonic oscillator in a bosonic heat bath, and recent papers of Zoli [48], Aron et al [49] for instance. The main tools in nonequilibrium quantum many-body dynamics such as the closed time path (CTP, in-in, or Schwinger-Keldysh) [50] effective action, the two-particle irreducible (2PI) representation, the large N expansion were introduced for the establishment of quantum kinetic field theory a quarter of centuries ago [51] and perfected along the way [52,53,54]. Applications to problems in atomic-optical [55], condensed matter [56], nuclear-particle [57] and gravitation-cosmology [58] have been on the rise in the last decade. A description of quantum field theoretic methods applied to nonequilibrium processes in a relativistic setting can be found in [59]. By contrast, there is far less applications of these well-developed (powerful albeit admittedly heavy-duty) methodology for the study of nonequilibrium steady state in open quantum systems in contact with two or more baths. We make such an attempt here for the exploration of fundamental issues of nonequilibrium statistical mechanics for quantum many-body systems and to provide a solid micro-physics foundation for the treatment of problems in quantum thermodynamics which we see will span an increasingly broader range of applications in physics, chemistry and biology. 
Below we explain our methodology and indicate its advantage when appropriate, while leaving the details of how it is related to other approaches in the sections proper. The mathematical framework of our methodology is the path-integral influence functional formalism [60,61,62], under which the influence action [63], the coarse-grained effective action [64] and the stochastic effective action [65,66] are defined. The stochastic equations such as the master equation (see, e.g., [63]) and the Langevin equations (see, e.g., [67]) can be obtained from taking the functional variations of these effective actions. There are two main steps in this approach we devised: 1. The derivation of the influence action S IF and coarse-grained effective actions S CG for the reduced system (composed of two linearly interacting oscillators, then extended to a harmonic chain) obtained by coarse-graining or integrating over the environmental variables (composed of two baths, coupled to the two end oscillators of a harmonic chain). The baths are here represented by two scalar fields [68,69]. Noise does not appear until the second stage. This material is contained in Sec. 2. 2. For Gaussian systems the imaginary part of the influence action can be identified via the Feynman-Vernon integral identity with a classical stochastic force (see, e.g., [70,71]). Expressing the exponential of the coarse-grained effective action S CG in the form of a functional integral over the noise distribution, the stochastic effective action S SE is identified as the exponent of the integrand. Taking the functional variation of S SE yields a set of Langevin equations for the reduced system. Alternatively one can construct the stochastic reduced density matrix. The averages of dynamical variables in a quantum open system includes taking the expectation values of the canonical variables as quantum operators and the distributional averages of stochastic variables as classical noises. We illustrate how to calculate these quantities with both methods in Sec. 3 and 4. Our methodology includes as subcomponents the so-called reduced density matrix approach (e.g., [46,72]), the nonequilibrium Green function (NEGF) [73,74,75]), the quantum master equation and quantum Langevin equation approaches. It is intimately related to the closed-time-path, Schwinger-Keldysh or in-in effective action method, where one can tap into the many useful field theoretical and diagrammatic methods developed. The stochastic equations of motion 2 obtained from taking the functional variation of the stochastic effective action enjoy the desirable features that a) they are real and causal, which guarantee the positivity of the reduced density matrix, and b) the backaction of the environment on the system is incorporated in a self-consistent way. These conditions are crucial for the study of nonequilibrium quantum processes including the properties of NESS. The physical question we ask is whether a NESS exists at late times. Since we have the evolutionary equations and their solutions for this system we can follow the quantum dynamics (with dissipation and decoherence) of physical quantities under the influence of the environment (in the form of two noise sources). We describe the behavior of the energy flux and derive the balance relations in Sec. 4. Paper II [77] will treat the same system but allow for nonlinear interaction between the two oscillators. For this we shall develop a functional perturbation theory for treating weak nonlinearity. 
Entanglement at high temperatures in quantum systems in NESS and equilibration in a quantum system with weak nonlinearity are the themes of planned Papers III, IV respectively [78,79]. Writing the reduced density operator in terms of the stochastic effective action + the probability functional, one can compute the energy current between each oscillator and its private bath in the framework of the reduced density operator. This functional method provides a useful platform for the construction of a perturbation theory, which we shall show in the next paper, in treating weakly nonlinear cases. Alternatively, from the influence action one can derive the Langevin equation describing the dynamics of the reduced system under the influence of a noise obtained from the influence functional. This is probably a more intuitive and transparent pathway in visualizing the energy flow between the system and the two baths. Features The fundamental solutions which together determine the evolutionary operator of the reduced density operator all have an exponentially decaying factor. This has the consequences that (1) the dependence on the system's initial conditions will quickly become insignificant as the system evolves in time. Because of the exponential decay, only during a short transient period are the effects of initial conditions observable. At late time, the behavior of the system is governed by the baths. In other words, for Gaussian initial states, the time evolution of the system is always attracted to the behavior controlled by the bath, independent of the initial conditions of the system. (2) the physical variables of interest here tend to relax to -becoming exponentially close to -a fixed value in time. For example, the velocity variance will asymptotically go to a constant on a time scale longer than the inverse of the decay constant. In addition all oscillators O k along the chain have the same relaxation time scale. The energy currents between B 1 -O 1 , or O k -O k+1 , or O n -B n in general all evolve with time, and will depend on the initial conditions. However, after the motion of the oscillators along the chain is fully relaxed, the energy currents between components approach time-independent values, with the same magnitude. This time-independence establishes the existence of an equilibrium steady state. Its insensitivity to the initial conditions of the chain testifies to its uniqueness, the same magnitude ensures there is no energy buildup in any component of the open system: Heat flows from one bath to another via the intermediary of the subsystems. To our knowledge, unlike for classical harmonic oscillators where mathematical proofs of the existence and uniqueness of the NESS have been provided, there is no such proofs for quantum harmonic systems. It is perhaps tempting to make such an assumption drawing the close correspondence between quantum and classical Gaussian systems this is what most authors tacitly assume (e.g. [46]). We have not provided a mathematical proof of the existence and uniqueness of a NESS for this generic system under study. What we have is an explicit demonstration, drawing our conclusions from solving the dynamics of this system under very general conditions -the full time evolution of the nonequilibrium open system is perhaps more useful for solving physical problems. Results 1. We have obtained the full nonequilibrium time evolution of the reduced system (in particular, energy flow along a harmonic chain between B 1 - 3. 
We have demonstrate that the NESS current is independent of the initial (Gaussian) configurations of the chain after the transient period. It thus implies uniqueness. 4 For comparison, [43] made a weak coupling assumption when working at low temperatures. For Gaussian systems one can solve the full dynamics at least formally in the strong coupling regime -this is well known, see, e.g., [80], and is assumed so in [46]. However, when explicit results are desired, one often has to make some compromised assumptions, such as weak coupling between the oscillators and their baths, as done in the last section of [46]. O 1 , or O k -O k+1 , or O n -B We have obtained a Landauer-like formula in • Eq. (4.56) for a two-oscillator chain, and • Eq. (5.10) for an n-oscillator chain 5. In particular for the case of two oscillators (n = 2), we define heat conductance (4.61), and have shown that • for sufficiently large n, the NESS current oscillates but converges to a constant independent of n. Coarse-Grained Effective Action for Open Quantum Systems Consider a quantum system S = S 1 + S 2 made up of two subsystems S 1,2 each consisting of a harmonic oscillator O 1,2 interacting with its own bath B 1,2 at temperatures T 1,2 respectively (assume T 1 > T 2 ). The system by itself is closed while when brought in contact with heat baths becomes open, owing to the overwhelming degrees of freedom in the baths which are inaccessible or unaccountable for. The situation of one oscillator interacting with one bath under the general theme of quantum Brownian motion (QBM) has been studied for decades and is pretty well-understood, extending to non-Markovian dynamics in a general environment. Here we wish to extend this study to two such identical configurations, adding a coupling between O 1 and O 2 which is assumed to be linear in this paper and nonlinear in subsequent papers. Let's call S the combined system of two coupled quantum Brownian oscillators each interacting with its private bath. Assume that each oscillator is isolated from the other thermal bath, thus there is no direct contact between O 1 and B 2 , but there is indirect influence through O 1 's coupling to O 2 and its interaction with B 2 . Assume also that initially the wave functions of the oscillators do not overlap and that the baths do not occupy the same spacetime region 5 . The physics question we are interested in is whether a nonequilibrium steady state exists in S and how it comes about, in terms of its time evolution. As a useful indicator we wish to describe the energy flow in the three segments: B 1 → O 1 → O 2 → B 2 . It is not clear a priori why energy should flow in a fixed direction (indeed it does not, before each oscillator fully relaxes) and the flow is steady (time translational, namely, there is no energy localization or heat accumulation, especially when we extend the two oscillators to a chain). For this purpose we need to derive the evolution equations for the reduced density operator [62] of the system proper, S, after it is rendered open, as a result of tracing over the two baths they interact with and including their backaction which shows up as quantum dissipation and diffusion in the equations of motion for the reduced system. 
We do this by way of functional formalisms operating at two levels: 1) at the influence or effective action level, familiar to those with experience of the Feynman-Vernon influence functional [60] and the Schwinger-Keldysh ('in-in', or closed-time-path) [50] methods; 2) at the equation of motion level, obtained from the functional variation of the effective / influence action. This includes the familiar stochastic equations -the master equation (see, e.g., [61,63] for derivations, Fokker-Planck [76] or Langevin equations [67]) which is probably more widely used. Let each subsystem be a quantum oscillator following a prescribed trajectory z (i) (see, e.g., [71]), and its displacement is described by χ (i) . The baths are represented by a massless quantum scalar field φ (i) (e.g., [68]) at finite temperature. (This is what we refer to as a thermal field, it is a quantum, not a classical, field, although for Gaussian systems quantum and classical equations of motions have the same form.) The action of the total system is given by S[χ, φ] = t 0 ds 2 i=1 m 2 χ (i)2 (s) − ω 2 χ (i)2 (s) − mσ χ (1) (s)χ (2) (s) + 2 i=1 t 0 d 4 x i e i χ (i) (s)δ 3 (x i − z (i) (s))φ (i) (x i ) + 2 i=1 t 0 d 4 x i 1 2 ∂ µ φ (i) (x i ) ∂ µ φ (i) (x i ) ,(2.1) among which we have the actions that describes the oscillators S χ , the bath fields S φ , the interaction between the two oscillators S I and between each oscillator and its bath S II , respectively, S χ [χ (i) ] = t 0 ds m 2 χ (i)2 (s) − ω 2 χ (i)2 (s) , S φ [φ (i) ] = t 0 d 4 x i 1 2 ∂ µ φ (i) (x i ) ∂ µ φ (i) (x i ) , S I [χ (1) , χ (2) ] = t 0 ds −mσχ (1) (s)χ (2) (s) , S II [χ (i) , φ (i) ] = t 0 d 4 x i e i χ (i) (s)δ 3 (x i − z (i) (s))φ (i) (x i ) . Here we assume that each oscillator is linearly coupled to its own thermal bath with coupling strength 6 e i , and the oscillators are coupled with each other in the forms of (χ (1) − χ (2) ) 2 or χ (1) χ (2) (which are equivalent by a shift in the χ coordinate), with an interaction strength denoted by σ. For simplicity without loss of physical contents we let the two oscillators have the same mass m and natural frequency ω. We leave the prescribed trajectory z (i) (s) general here because the position of the oscillator changes the configuration of the quantum field which in turn affects the other oscillator, aspects which need to be included in quantum entanglement considerations (see, e.g., [81]) and in treating relativistic quantum information issues (e.g., [82]). In a later section when we turn to calculating the energy flow we can safely assume that their external (centers of mass) variables are fixed in space, and only their internal variables χ enter in the dynamics. Now we assume that the initial state of the total system S at time t = 0 is in a factorizable form 7 . ρ(0) = ρ χ ⊗ ρ β1 ⊗ ρ β2 , (2.2) where ρ χ is the initial density operator for the system proper S, consisting of two oscillators, with each oscillator described by a Gaussian wavefunction ρ χ (χ (i) a , χ (i) a ; 0) = 1 πς 2 1/2 exp − 1 2ς 2 χ (i)2 a + χ (i)2 a . (2. 3) The parameter ς is the width of the wavepacket, and the parameters χ a , χ b are the shorthand notations for χ evaluated at times t = 0 and t respectively, that is, χ a = χ(0) and χ b = χ(t). This subscript convention will also apply to other variables. 
Each bath is initially in its own thermal state at temperature β −1 i , so the corresponding initial density matrix is ρ βi (φ (i) a , φ (i) a ; 0) = φ (i) a |e −βiH φ [φ (i) ] |φ (i) a (2.4) H φ [φ (i) ] is the free scalar field Hamiltonian associated with the action S φ [φ (i) ]. The density operator of the total system is evolved by the unitary evolution operator U (t, 0), ρ(t) = U (t, 0) ρ(0) U −1 (t, 0) . (2.5) In the path-integral representation the total density matrix at time t is related to its values at an earlier moment t = 0 by ρ(χ (i) b , χ (i) b ; φ (i) b , φ (i) b ; t) = 2 i=1 ∞ −∞ dχ (i) a dχ (i) a ∞ −∞ dφ (i) a dφ (i) a χ (i) b χ (i) a Dχ (i) + χ (i) b χ (i) a Dχ (i) − φ (i) b φ (i) a Dφ (i) + φ (i) b φ (i) a Dφ (i) − exp 2 i=1 i S χ [χ (i) + ] − i S χ [χ (i) − ] × exp i S I [χ (1) + , χ(2)+ ] − i S I [χ (1) − , χ (2) − ] × exp 2 i=1 i S φ [φ (i) + ] − i S φ [φ (i) − ] × exp 2 i=1 i S II [χ (i) + , φ (i) + ] − i S II [χ (i) − , φ (i) − ] × ρ χ (χ (i) a , χ (i) a ; 0) 2 i=1 ρ βi (φ (i) a , φ (i) a ; 0) ,(2.6) The subscripts +, − attached to each dynamical variable indicate that the variable is evaluated along the forward and backward time paths, respectively implied by U and U −1 in (2.5). Reduced Density Operator and Green Functions When we focus on the dynamics of the oscillators S, accounting for only the gross influences of their environments but not the details, we work with the reduced density operator of S obtained by tracing out the microscopic degrees of freedom of its environment, their two baths. We obtain ρ χ (χ (i) b , χ (i) b ; t) = Tr φ (1) Tr φ (2) ρ(χ (i) b , χ (i) b ; φ (i) b , φ (i) b ; t) = ∞ −∞ 2 i=1 dχ (i) a dχ (i) a ρ χ (χ (i) a , χ (i) a , t a ) 2 i=1 χ (i) b χ (i) a Dχ (i) + χ (i) b χ (i) a Dχ (i) − × exp 2 i=1 i S χ [χ (i) + ] − i S χ [χ (i) − ] × exp i S I [χ (1) + , χ(2)+ ] − i S I [χ (1) − , χ (2) − ] × 2 i=1 exp i 2 e 2 i t 0 ds ds χ (i) + (s) − χ (i) − (s) G R, βi (s, s ) χ (i) + (s ) + χ (i) − (s ) + i χ (i) + (s) − χ (i) − (s) G H, βi (s, s ) χ (i) + (s ) − χ (i) − (s ) , (2.7) where the retarded Green's function G R, βi is defined by G R, βi (s, s ) = i θ(s − s ) Tr ρ βi φ (i) (z (i) (s), s), φ (i) (z (i) (s ), s ) = i θ(s − s ) φ (i) (z (i) (s), s), φ (i) (z (i) (s ), s ) = G R (s, s ) , (2.8) and the Hadamard function G H, βi by G H, βi (s, s ) = 1 2 Tr ρ βi φ (i) (z (i) (s), s), φ (i) (z (i) (s ), s ) . (2.9) The Hadamard function is simply the expectation value of the anti-commutator of the quantum field φ (i) , and notice that the retarded Green's function does not have any temperature dependence. The exponential containing G R, βi and G H, βi in (2.7) is the Feynman-Vernon influence functional F, F[χ + , χ − ] = e i S IF [χ+,χ−] = 2 i=1 exp i 2 e 2 i t 0 ds ds χ (i) + (s) − χ (i) − (s) G R, βi (s, s ) χ (i) + (s ) + χ (i) − (s ) + i χ (i) + (s) − χ (i) − (s) G H, βi (s, s ) χ (i) + (s ) − χ (i) − (s ) , (2.10) where S IF is called the influence action. It captures the influences of the environment on the system S. Coarse-Grained Effective Action The coarse-grained effective action (CG) S CG is made of the influence action S IF from the environment and the actions of the system by S CG [q (i) , r (i) ] = 2 i=1 S χ [χ (i) + ] − S χ [χ (i) − ] + S I [χ (1) + , χ(2)+ ] − S I [χ (1) − , χ (2) − ] + S IF [χ + , χ − ] = t 0 ds 2 i=1 mq (i) (s)ṙ (i) (s) − mω 2 q (i) (s)r (i) (s) (2.11) − mσ q (1) (s)r (2) (s) + q (2) (s)r (1) (s) + 2 i=1 e 2 i t 0 ds ds q (i) (s)G R (s, s )r (i) (s ) + i 2 q (i) (s)G H, βi (s, s )q (i) (s ) . 
(2.12) Here we have introduced the relative coordinate q (i) and the centroid coordinate r (i) ,q (i) = χ (i) + − χ (i) − , r (i) = 1 2 χ (i) + + χ (i) − . (2.13) Anticipating the oscillator chain treated in a later section, it is convenient to introduce the vectorial notations by q =   q (1) q (2)   , r =   r (1) r (2)   , Ω Ω Ω 2 =   ω 2 σ σ ω 2   , G R (s, s ) =   G R (s, s ) 0 0 G R (s, s )   , G H (s, s ) =   G H, β1 (s, s ) 0 0 G H, β2 (s, s )   , and from now on assume the coupling strengths e 1 and e 2 are the same, that is, e 1 = e 2 = e. In so doing the coarse-grained effective action can be written into a more compact form S CG = t 0 ds mq T (s) ·ṙ(s) − m q(s) · Ω Ω Ω 2 · r(s) (2.14) + e 2 s 0 ds q T (s) · G R (s, s ) · r(s ) + i q T (s) · G H (s, s ) · q(s ) . Formally, (2.14) is very general, and can be readily applied to the configuration that oscillators simultaneously interact with two different thermal baths. Since the coarse-grain effective action S CG governors the dynamics of the system S under the influence of the environments, the time evolution of the reduced density matrix can thus be constructed with S CG . We write the reduced density matrix as ρ χ (χ (i) b , χ (i) b ; t) = ∞ −∞ 2 i=1 dq (i) a dr (i) a J(q (i) b , r (i) b , t; q (i) a , r (i) a , 0) ρ χ (q (i) a , r (i) a ; 0) , (2.15) where J(q (i) b , r (i) b , t; q (i) a , r (i) a , 0) = 2 i=1 q (i) b q (i) a Dq (i) r (i) b r (i) a Dr (i) exp i S CG [q (i) , r (i) ] ,(2.16) is the evolutionary operator for the reduced density matrix (from time 0 to t). The path integral in the evolutionary operator J can be evaluated exactly because the coarse-grained effective action (2.14) is quadratic in q and r. We won't pursuit this route in this paper but it will be used later for our study of nonlinear systems. Stochastic Effective Action and Langevin Equations We now proceed to derive the stochastic equations and find their solutions Stochastic Effective Action Using the Feynman-Vernon identity for Gaussian integrals we can express the imaginary part of the coarse-grained effective action S CG in (2.14) in terms of the distributional integral of a Gaussian noise ξ ξ ξ, exp − e 2 2 t 0 ds t 0 ds q T (s) · G H (s, s ) · q(s ) = Dξ ξ ξ P[ξ ξ ξ] exp i t 0 ds q T (s) · ξ ξ ξ(s) , (3.1) with the moments of the noise given by ξ ξ ξ(s) = 0 , ξ ξ ξ(s) · ξ ξ ξ T (s ) = e 2 G H (s, s ) . (3.2) The angular brackets here denote the ensemble average over the probability distribution functional P[ξ ξ ξ]. Thus we may write the exponential of the coarsegrained effective action S CG in a form of a distributional integral e i S CG [q,r] = Dξ ξ ξ P[ξ ξ ξ] exp i t 0 ds mq T (s) ·ṙ(s) − m q(s) · Ω Ω Ω 2 · r(s) + q T (s) · ξ ξ ξ(s) + s 0 ds q T (s) · G R (s, s ) · r(s ) = Dξ ξ ξ P[ξ ξ ξ] e i S SE [q,r,ξ ξ ξ] , (3.3) where S SE is the stochastic effective action [65] given by S SE [q, r, ξ ξ ξ] = t 0 ds mq T (s) ·ṙ(s) − m q(s) · Ω Ω Ω 2 · r(s) + q T (s) · ξ ξ ξ(s) + s 0 ds q T (s) · G R (s, s ) · r(s ) . (3.4) At this point, we may use the stochastic effective action to either derive the Langevin equation, or to construct the stochastic reduced density matrix. We proceed with the former route below. Langevin Equations Taking the variation of S SE with respect to q and letting q = 0, we arrive at a set of Langevin equation, mχ χ χ(s) + m Ω Ω Ω 2 · χ χ χ(s) − s 0 ds G R (s, s ) · χ χ χ(s ) = ξ ξ ξ(s) . 
(3.5) Formally, this equation of motion describes the time evolution of the reduced system under the non-Markovian influence of the environment. The influence is manifested in the form of the local stochastic driving noise ξ ξ ξ and the nonlocal dissipative force, s 0 ds G R (s, s ) · χ χ χ(s ) . In general, this nonlocal expression implies the evolution of the reduced system is history-dependent. However, in the current configuration, the retarded Green's functions matrix has a very simple form G R (s, s ) = − e 2 2π θ(s − s ) δ (s − s ) 1 0 0 1 , (3.6) so the Langevin equation reduces to a purely local form mχ χ χ(s) + 2mγχ χ χ(s) + m Ω Ω Ω 2 R · χ χ χ(s) = ξ ξ ξ(s) ,(3.7) where Ω Ω Ω 2 R is obtained by absorbing the divergence of G R (s, s ) into the diagonal elements of the original Ω Ω Ω 2 , and γ = e 2 /8πm > 0. We immediately see that eq. (3.7) in fact describes nothing but a bunch of coupled, driven, damped oscillators. Thus the Langevin equation has a very intuitive interpretation 8 . The general solution to (3.5) or (3.7) can be expanded in terms of fundamental solution matrices D 1 and D 2 . They are simply the homogeneous solutions of the corresponding equation of motion but satisfy a particular set of initial conditions, D 1 (0) = 1 ,Ḋ 1 (0) = 0 , (3.8) D 2 (0) = 0 ,Ḋ 2 (0) = 1 . (3.9) Thus the general solution is given by χ χ χ(s) = D 1 (s) · χ χ χ(0) + D 2 (s) ·χ χ χ(0) + 1 m s 0 ds D 2 (s − s ) · ξ ξ ξ(s ) . (3.10) This can be the starting point to compute the variance of physical observables, their correlation functions or the variance of the conjugated variables. For example, the symmetrized correlation functions of χ χ χ (where the curly brackets below represent the anti-commutator) are given by tem, whose dissipative behavior will in most cases diminish while the reduced system relaxes in time. To accommodate both the quantum and the stochastic aspects we only need to extend the meaning of the angular brackets · · · to that of both taking the expectation value and the distributional average we can properly incorporate the intrinsic quantum nature of the reduced system and the noise effects, as demonstrated in (3.11). This approach works very nicely for the quantities which take on symmetric ordering, and the result is consistent with that computed by the reduced density matrix [85]. It may become problematic if the quantities of interest take on a different ordering from the symmetric one. 1 2 {χ χ χ(t) · χ χ χ T (t )} = D 1 (t) · χ χ χ(0) · χ χ χ T (0) · D 1 (t ) + D 2 (t) · χ χ χ(0) ·χ χ χ T (0) · D 2 (t ) + e 2 m 2 t 0 ds t 0 ds D 2 (t − s) · G H (s, s ) · D 2 (t − s ) , Likewise we can find the variances pertinent to the reduced system in the NESS configurations by the Langevin equation. For example, the elements of the covariance matrix are given by ∆ 2 χ χ χ (l) b = e 2 m 2 t 0 ds ds D 2 (s) · G H (s − s ) · D 2 (s ) ll , (3.12) ∆ 2 p (l) b = e 2 t 0 ds ds Ḋ 2 (s) · G H (s − s ) ·Ḋ 2 (s ) ll , (3.13) 1 2 {∆χ χ χ (l) b , ∆p (l) b } = e 2 m t 0 ds ds D 2 (s) · G H (s − s ) ·Ḋ 2 (s ) ll , (3.14) at late time t γ −1 . The contributions from the homogenous solutions are transient, exponentially decaying with time, so they almost vanish on the evolution time scale greater than γ −1 . In the limit t → ∞, eqs. 
(3.12)-(3.14) become ∆ 2 χ χ χ (l) b = e 2 m 2 ∞ −∞ dω 2π D D D 2 (ω) · G H (ω) · D D D * 2 (ω) ll , (3.15) ∆ 2 p (l) b = e 2 ∞ −∞ dω 2π ω 2 D D D 2 (ω) · G H (ω) · D D D * 2 (ω) ll , (3.16) 1 2 {∆χ χ χ (l) b , ∆p (l) b } = 0 , (3.17) where D D D 2 (ω) is given by D D D 2 (ω) = −ω 2 I + Ω Ω Ω 2 R − i 2γω I −1 , Ω Ω Ω 2 R =   ω 2 R σ σ ω 2 R   . (3.18) We see that its inverse Fourier transformation D D D 2 (s) is the kernel to the Langevin equation, that is, it satisfies (3.7) with an impulse force described by a delta function δ(s). Furthermore, D D D 2 (s) is related to the fundamental solution matrix D 2 (s) by D D D 2 (s) = θ(s) D 2 (s). The results in (3.15) 19) or in this case G H (ω) =    coth β 1 ω 2 0 0 coth β 2 ω 2    · Im G R (ω) ,(3.    G H, β1 (ω) 0 0 G H, β2 (ω)     =    coth β 1 ω 2 0 0 coth β 2 ω 2    ·     Im G R (ω) 0 0 Im G R (ω)     . Note that each subsystem with its private thermal bath still has its own fluctuationdissipation relation G H, βi (ω) = coth β i ω 2 Im G R (ω) . (3.20) Although the fluctuation-dissipation relation is diagonal, the matrix D D D 2 (ω) in (3.15) and (3.16) will blend together the effects of both thermal baths. For example, let us examine ∆ 2 χ χ χ (1) b . In the asymptotic future, it is given by ∆ 2 χ χ χ (1) b = 1 m 2 ∞ −∞ dω 2π D D D 11 2 (ω) 2 G 11 H (ω) + D D D 12 2 (ω) 2 G 22 H (ω) (3.21) as seen from (3.15). Physically it is not surprising since each oscillator's dynamics needs to reckon with the other oscillator and its bath, albeit indirectly, and thus is determined by both baths. Alternatively, this feature can be seen from the dynamics of the normal modes of the reduced system that diagonalize Ω Ω Ω 2 R . Let v 1 and v 2 be the eigenvectors of Ω Ω Ω 2 R , with eigenvalues ω 2 + = ω 2 R + σ and ω 2 − = ω 2 R − σ, respectively. The 2 × 2-matrix U = (v 1 , v 2 ) can be used to diagonalize the Ω Ω Ω 2 R matrix into Λ Λ Λ 2 = U T · Ω Ω Ω 2 R · U =   ω 2 + 0 0 ω 2 −   . (3.22) Correspondingly, q and r will be rotated by U to u u u = U T · q , v v v = U T · r . (3.23) If both oscillators are fixed in space, as is the case for our investigation of the NESS, then the Green's functions looks much simpler because they don't have spatial dependence. Thus the new Green's function matrix G G G, transformed by U, is related to the original one G by G G G R (s, s ) = U T · G R (s, s ) · U , G G G H (s, s ) = U T · G H (s, s ) · U . (3.24) Writing out the matrix G G G explicitly, e.g., taking G G G H as an example, yields G G G H (s, s ) = U T · G H (s, s ) · U = e 2 2   1 1 1 −1   ·   G H, β1 (s, s ) 0 0 G H, β2 (s, s )   ·   1 1 1 −1   = e 2 2   G H, β1 (s, s ) + G H, β2 (s, s ) G H, β1 (s, s ) − G H, β2 (s, s ) G H, β1 (s, s ) − G H, β2 (s, s ) G H, β1 (s, s ) + G H, β2 (s, s )   . (3.25) Again we see that when we decompose the degrees of the freedom of the oscillators into their normal modes, the effects from both thermal baths are superposed. For later reference, we explicitly write down the elements of the matrix D D D 2 (ω) as, D D D 2 (ω) = −ω 2 I+Ω Ω Ω 2 R −i 2γω I −1 =       −ω 2 + ω 2 R − i 2γω det D D D −1 2 (ω) − σ det D D D −1 2 (ω) − σ det D D D −1 2 (ω) −ω 2 + ω 2 R − i 2γω det D D D −1 2 (ω)       , (3.26) with det D D D −1 2 (ω) = ω 2 − ω 2 R + i 2γω 2 − σ 2 . Thus the variance? 
of χ (1) takes the form ∆ 2 χ χ χ (1) b (3.27) = 1 m 2 ∞ −∞ dω 2π    ω 2 − ω 2 R 2 + 4γ 2 ω 2 (ω 2 − ω 2 R + σ) 2 + 4γ 2 ω 2 (ω 2 − ω 2 R − σ) 2 + 4γ 2 ω 2 G 11 H (ω) + σ 2 (ω 2 − ω 2 R + σ) 2 + 4γ 2 ω 2 (ω 2 − ω 2 R − σ) 2 + 4γ 2 ω 2 G 22 H (ω)    , and G ii H (ω) = ω 4π coth β i ω 2 . (3.28) Apparently at late time t → ∞, the displacement variance of O 1 in (3.27) approaches a time-independent constant. Stochastic Reduced Density Matrix In the context of open quantum systems we mention two ways to obtain the desired physical quantities associated with the dynamics of the reduced system: one is by way of the Langevin equation, which is less formal, more flexible and physically intuitive. It is particularly convenient if the quantities at hand involve noise either from the environment or externally introduced. The other is by way of the reduced density operator approach, which is easy to account for the intrinsic quantum dynamics of the reduced system and to enforce the operator ordering. The drawback is that it is less straightforward to use this method to compute the expectation values of operators corresponding to physical variables which contain the environmental noise because the influence functional does not have explicit dependence on the noise. Only after invoking the Feynman-Vernon Gaussian integral identity would the noise of the environment, now in the form of a classical stochastic forcing term, be made explicit. We will show in this section a way to combine the advantages of these two approaches, by incorporating the noise from the environment in the reduced density matrix, whose dynamical equation is obtained by taking the functional variation of the stochastic effective action. Let us rewrite the reduced density matrix (2.15) in terms of the stochastic effective action S SE in (3.4), ρ χ (q b , r b ; t) = ∞ −∞ dq a dr a ρ χ (q a , r a ; 0) q b qa Dq r b ra Dr exp i S CG [q, r] = ∞ −∞ dq a dr a ρ χ (q a , r a ; 0) q b qa Dq r b ra Dr Dξ ξ ξ P[ξ ξ ξ] e i S SE [q,r,ξ ξ ξ] = Dξ ξ ξ P[ξ ξ ξ] ρ χ (q b , r b , t; ξ ξ ξ] ,(3.29) The term in the integrand ρ χ (q b , r b , t a ; ξ ξ ξ] is called the stochastic reduced density matrix which is seen to have explicit dependence on the noise ξ ξ ξ of the environment: In this rendition, the reduced system, now driven by a classical stochastic force of the environment (by virtue of the Feynman-Vernon transform) as part of the influence from the environment, is described by the stochastic density matrix. ρ χ (q b , r b , t b ; ξ ξ ξ] = ∞ −∞ dq a dr a ρ χ (q a , For each realization of the environmental noise, the reduced system evolves to a state described by the density matrix (3.30). Different realizations make the system end up at different final states with probability given by P[ξ ξ ξ]. The reduced system is ostensibly non-conservative with the presence of friction and noise terms, and rightly so, as these effects originate from the interaction between the system and its environment, which in the case understudy consists of two baths. These two processes are, however, constrained by the fluctuation-dissipation relation associated with each bath. This relation plays a fundamental role in the energy flow balance between the system and the bath: fluctuations in the environment show up as noise and its backaction on the system gives rise to dissipative dynamics. How this relation bears on the problem of equilibration for a quantum system interacting with one bath is pretty well known. 
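Returning to the late-time variance (3.27) with the thermal spectra (3.28), the frequency integral is straightforward to evaluate numerically. The sketch below is illustrative only (parameter values are arbitrary, units with ħ = k_B = 1 and m = 1, and the overall normalization follows (3.27) as written).

```python
import numpy as np
from scipy.integrate import quad

# Illustrative evaluation of the late-time displacement variance of O_1,
# Eq. (3.27), with the thermal spectra (3.28); arbitrary parameters,
# units with hbar = k_B = 1 and m = 1, normalization as in (3.27).
omega_R, sigma, gamma = 1.0, 0.3, 0.2
T1, T2 = 2.0, 0.5                                    # bath temperatures, beta_i = 1/T_i

def G_H(w, T):
    # (w / 4 pi) coth(w / 2T); the w -> 0 limit is T / (2 pi)
    return T / (2.0*np.pi) if w < 1e-10 else (w / (4.0*np.pi)) / np.tanh(w / (2.0*T))

def integrand(w):
    den = ((w**2 - omega_R**2 + sigma)**2 + 4*gamma**2*w**2) * \
          ((w**2 - omega_R**2 - sigma)**2 + 4*gamma**2*w**2)
    num1 = (w**2 - omega_R**2)**2 + 4*gamma**2*w**2
    return (num1*G_H(w, T1) + sigma**2*G_H(w, T2)) / den / (2.0*np.pi)

val, err = quad(integrand, 0.0, np.inf, limit=400)
print("late-time variance of chi^(1):", 2.0*val, " (the integrand is even in w)")
```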
We will show explicitly below how this relation underscores the approach to steady state for systems in nonequilibrium. To compute the quantum and stochastic average of a dynamical variable, say, f (χ χ χ; ξ ξ ξ] at time t, which contains both the stochastic variable ξ ξ ξ and the canonical variables χ χ χ of the reduced system, we simply evaluate the trace associated with the system variables and the ensemble average associated with the environmental noise, f (χ χ χ; ξ ξ ξ] = Dξ ξ ξ P[ξ ξ ξ] Tr χ ρ χ (t; ξ ξ ξ] f (χ χ χ; ξ ξ ξ] . (3.32) The procedure in (3.32) is as follows: for each specific realization of the stochastic source, we first calculate the expectation value of the quantum operator f (χ χ χ; ξ ξ ξ] for the state described by the reduced density operator ρ χ (t; ξ ξ ξ]. The obtained result, still dependent on the stochastic variable, will then be averaged over according to the probability distribution P[ξ ξ ξ] of the noise. As an example, we will compute the power P ξ1 delivered by the stochastic force (noise) ξ 1 from Bath 1 to Oscillator 1. The power P ξ1 is defined by P ξ1 (t) = ξ 1 (t)χ (1) (t) ,(3.33) and observe that p (1) = mχ (1) . Thus we have P ξ1 (t) = 1 m ξ 1 (t) p (1) (t) = − i m Dξ ξ ξ P[ξ ξ ξ] ∞ −∞ dq b dr b δ(q b ) ξ 1 (t) ∂ ∂χ (1) b ρ χ (q b , r b , t; ξ ξ ξ) , (3.34) where the momentum p (i) canonical to the coordinate χ (i) is given by 35) and the trace over the dynamical variables of the reduced system is defined as p (i) = −i ∂ ∂χ (i) ,(3.Tr χ = ∞ −∞ dq b dr b δ(q b ) . (3.36) Since the initial state of the reduced system is a Gaussian state and the stochastic effective action is quadratic in the system's variables, the final state will remain Gaussian and the corresponding reduced density operator thus can be evaluated exactly. To derive the explicit form of the reduced density matrix, we first evaluate the path integrals in (3.30), 37) where N is the normalization constant, which can be determined by the unitarity requirement. It is given by q b qa Dq r b ra Dr exp i t 0 ds mq T (s) ·ṙ(s) − m q(s) · Ω Ω Ω 2 · r(s) + q T (s) · ξ ξ ξ(s) + s 0 ds q T (s) · G R (s, s ) · r(s ) = N exp i m q T b ·ṙ b − i m q T a ·ṙ a ,(3.N = m 2π 2 detμ µ µ(0) .r(s) = ν ν ν(s) · r a + µ µ µ(s) · r b + J J J r (s) ,(3.39) for 0 ≤ s ≤ t. The functions µ µ µ(s), ν ν ν(s) are defined by µ µ µ(s) = D 2 (s) · D −1 2 (t) , (3.40) ν ν ν(s) = D 1 (s) − D 2 (s) · D −1 2 (t) · D 1 (t) ,(3.41) and the current J J J r (s) is given by J J J r (s) = 1 m s 0 ds D 2 (s − s ) · ξ ξ ξ(s ) − 1 m t 0 ds D 2 (s) · D −1 2 (t) · D 2 (t − s ) · ξ ξ ξ(s ) .(3.42) Additionally, we can write the partial derivative ∂/∂χ as ∂ ∂χ (1) b = ∂ ∂q (1) b + 1 2 ∂ ∂r (1) b . (3.43) Here as an example of the stochastic reduced density matrix approach, we will provide greater details in the derivation of the power delivered by the stochastic force ξ ξ ξ 1 on O 1 . 
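Before turning to that derivation, a quick numerical consistency check of the kernels µ(s) and ν(s) defined in (3.40)-(3.41) may be useful; it is an illustration added here, with arbitrary parameter values. Since the path (3.39) must interpolate between r(0) = r_a and r(t) = r_b, one needs ν(0) = 1, µ(0) = 0, ν(t) = 0 and µ(t) = 1, which follows from D1(0) = 1 and D2(0) = 0.

```python
import numpy as np

# Boundary behaviour of mu(s), nu(s) in Eqs. (3.40)-(3.41): the saddle-point path (3.39)
# must satisfy r(0) = r_a and r(t) = r_b, i.e. nu(0) = 1, mu(0) = 0, nu(t) = 0, mu(t) = 1.
# Illustrative parameters (arbitrary units, m = 1).
gamma, omega_R, sigma, t_final = 0.2, 1.0, 0.3, 7.3
Pp, Pm = 0.5*np.array([[1., 1.], [1., 1.]]), 0.5*np.array([[1., -1.], [-1., 1.]])

def D(a, s):
    # closed-form fundamental matrices D_a(s), a = 1, 2, built from the normal modes
    out = np.zeros((2, 2))
    for P, w2 in ((Pp, omega_R**2 + sigma), (Pm, omega_R**2 - sigma)):
        W = np.sqrt(w2 - gamma**2)
        d1 = np.exp(-gamma*s)*(np.cos(W*s) + (gamma/W)*np.sin(W*s))
        d2 = np.exp(-gamma*s)*np.sin(W*s)/W
        out += P * (d1 if a == 1 else d2)
    return out

def mu(s):  return D(2, s) @ np.linalg.inv(D(2, t_final))                      # Eq. (3.40)
def nu(s):  return D(1, s) - D(2, s) @ np.linalg.inv(D(2, t_final)) @ D(1, t_final)   # Eq. (3.41)

for name, M, target in (("mu(0)", mu(0.0), 0.0), ("nu(0)", nu(0.0), 1.0),
                        ("mu(t)", mu(t_final), 1.0), ("nu(t)", nu(t_final), 0.0)):
    print(name, "max deviation from", target, "* identity:",
          np.max(np.abs(M - target*np.eye(2))))
```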
With these, (3.34) becomes P ξ1 (t) = − i m N Dξ ξ ξ P[ξ ξ ξ] ∞ −∞ dq b dr b δ(q b ) ∞ −∞ dq a dr a ρ(q a , r a , 0) ξ 1 (t) × ∂ ∂q (1) b + 1 2 ∂ ∂r (1) b exp i m q T b ·ṙ b − i m q T a ·ṙ a = − i m N Dξ ξ ξ P[ξ ξ ξ] ξ 1 (t) ∞ −∞ dq b dr b δ(q b ) ∞ −∞ dq a dr a ρ(q a , r a , 0) × i mṙ (1) b + i m 2 q T b ·μ µ µ(t) − q T a ·μ µ µ(0) 11 exp i m q T b ·ṙ b − i m q T a ·ṙ a = N Dξ ξ ξ P[ξ ξ ξ] ξ 1 (t) ∞ −∞ dr b ∞ −∞ dq a dr a ρ(q a , r a , 0) × exp −i m q T a · ν ν ν(0) · r a +μ µ µ(0) · r b +J J J r (0) × r T a ·ν ν ν T (t) + r T b ·μ µ µ T (t) +J J J T r (s) − 1 2 q T a ·μ µ µ(0) 11 = N Dξ ξ ξ P[ξ ξ ξ] ξ 1 (t) ∞ −∞ dr b ∞ −∞ dq a dr a ρ(q a , r a , 0) × ν ν ν 1m (t) r (m) a +J (1) r (t) − 1 2μ µ µ T 1m (0) q (m) a + i mμ µ µ 1m (t)μ µ µ −1 mn (0) ∂ ∂q (n) a × exp −i m q T a · ν ν ν(0) · r a +μ µ µ(0) · r b +J J J r (0) = N 2π m 2 1 πς 2 2 2 Dξ ξ ξ P[ξ ξ ξ] ξ 1 (t) ∞ −∞ dq a dr a exp − r T a · r a ς 2 − q T a · q a 4ς 2 × exp −i m q T a · ν ν ν(0) · r a +J J J r (0) × ν ν ν 1m (t) r (m) a +J (1) r (t) − 1 2μ µ µ T 1m (0) q (m) a + i mμ µ µ 1m (t)μ µ µ −1 mn (0) ∂ ∂q (n) a δ (2) q T a ·μ µ µ(0) = N detμ µ µ(0) 2π m 2 1 πς 2 2 2 Dξ ξ ξ P[ξ ξ ξ] ξ 1 (t) ∞ −∞ dq a dr a δ (2) q a exp − r T a · r a ς 2 − q T a · q a 4ς 2 × exp −i m q T a · ν ν ν(0) · r a +J J J r (0) × ν ν ν(t) · r a +J J J r (t) − 1 2μ µ µ T (0) · q a − i mμ µ µ(t) ·μ µ µ −1 (0) · −i m ν ν ν(0) · r a +J J J r (0) − 1 2ς 2 q a 11 = N detμ µ µ(0) 2π m 2 1 πς 2 2 2 Dξ ξ ξ P[ξ ξ ξ] ξ 1 (t) ∞ −∞ dr a exp − r T a · r a ς 2 × J J J r (t) −μ µ µ(t) ·μ µ µ −1 (0) ·J J J r (0) 11 = N detμ µ µ(0) 2π m 2 Dξ ξ ξ P[ξ ξ ξ] ξ 1 (t) J J J r (t) −μ µ µ(t) ·μ µ µ −1 (0) ·J J J r (0) 11 . (3.44) The expressions in the square brackets can be reduced tȯ J J J r (t) −μ µ µ(t) ·μ µ µ −1 (0) ·J J J r (0) = 1 m t 0 ds Ḋ 2 (t − s ) · ξ ξ ξ(s ) . (3.45) Thus the power delivered to Oscillator 1 from Bath 1 is equal to P ξ1 (t) = 1 m Dξ ξ ξ P[ξ ξ ξ] ξ 1 (t) t 0 ds Ḋ 1m 2 (t − s ) ξ m (s ) = 1 m t 0 ds Ḋ 1m 2 (t − s ) Dξ ξ ξ P[ξ ξ ξ] ξ 1 (t)ξ m (s ) = e 2 m t 0 ds Ḋ 1m 2 (t − s ) G 1m H (t − s ) . (3.46) We will see that this is exactly the same as (4.14) except that (4.14) is expressed in terms of the normal modes. Alternatively, we can compare (3.46) with (4.46). They are the same. Energy Transport, Power Balance and Stationarity Condition As an important application of the formalism developed so far we examine in this section how energy is transported in the combined system S of two oscillators with two baths, in the nature of heat flux, to see whether there is any build-up or localization of energy (the answer is no), or whether there is a balance in the energy flow which signifies the existence of a nonequilibrium steady state (the answer is yes, with several power balance relations). Energy Flow between Components To study the energy transport in the system, as we mentioned in the last section, it is physically more transparent to use the Langevin equations (3.7) instead of the matrix form we obtained in the previous section. In this section we will illustrate this approach by deriving the Langevin equations for each subsystem, then analyze the heat transfer and energy flux balance relations from them. acted on by these three seemingly unrelated forces. Certain correlations will be established among them over time. 
We will examine the interplay among these forces and their roles in energy transport, the power they deliver and the possible existence of power balance relations, which provide the conditions for the establishment of a nonequilibrium steady state. mχ (1) (s) + 2mγχ (1) (s) + mω 2 R χ (1) (s) + mσ χ (2) (s) = ξ (1) (s) , (4.1) mχ (2) (s) + 2mγχ (2) (s) + mω 2 R χ (2) (s) + mσ χ (1) (s) = ξ (2) (s) . First we will find the normal modes of the coupled motion (4.1)-(4.2). By an appropriate linear combination of the original dynamical variables χ (1) , χ (2) where ω 2 ± = ω 2 R ± σ. These are the normal modes of the coupled dynamics, and they act as two independent driven, damped oscillators. Here we require σ < ω 2 R to avoid any instability in the evolution of the normal modes. Assume the fundamental solutions to (4.4) and (4.5) are given by d χ + = [χ (1) + χ (2) ]/2 , χ − = χ (1) − χ (2) ,(4.χ − (s) = d (−) 1 (s)χ − (0) + d (−) 2 (s)χ − (0) + 1 m s 0 ds d (−) 2 (s − s )ξ − (s ) . (4.9) The corresponding solutions to χ (1) (s) and χ (2) (s) can be obtained by the superposition of the normal modes, χ (1) (s) = χ + (s) + 1 2 χ − (s) ,(4. 10) χ (2) (s) = χ + (s) − 1 2 χ − (s) ,(4.11) such that χ (1) (s) = 1 2 d (+) 1 (s) + d (−) 1 (s) χ (1) (0) + 1 2 d (+) 1 (s) − d (−) 1 (s) χ (2) (0) + 1 2 d (+) 2 (s) + d (−) 2 (s) χ (1) (0) + 1 2 d (+) 2 (s) − d (−) 2 (s) χ (2) (0) + 1 2m s 0 ds d (+) 2 (s − s ) + d (−) 2 (s − s ) ξ 1 (s ) + 1 2m s 0 ds d (+) 2 (s − s ) − d (−) 2 (s − s ) ξ 2 (s ) , (4.12) and likewise χ (2) (s) = 1 2 d (+) 1 (s) − d (−) 1 (s) χ (1) (0) + 1 2 d (+) 1 (s) + d (−) 1 (s) χ (2) (0) + 1 2 d (+) 2 (s) − d (−) 2 (s) χ (1) (0) + 1 2 d (+) 2 (s) + d (−) 2 (s) χ (2) (0) + 1 2m s 0 ds d (+) 2 (s − s ) − d (−) 2 (s − s ) ξ 1 (s ) + 1 2m s 0 ds d (+) 2 (s − s ) + d (−) 2 (s − s ) ξ 2 (s ) . (4.13) These are nothing but the superposition of normal modes in a tethered motion. Now we are ready to compute the power or energy flow or heat transfer between subsystems. As we have stressed before, we do not a priori assume the existence of NESS in this system. The energy flow between the neighboring components of the total system is not necessarily time-independent, let alone having the same magnitude. Rather, we seek to demonstrate the presence of a steady energy flow from one bath to the other after the systems is fully relaxed on a time scale t γ −1 . Energy Flow between B 1 and S 1 The interactions between Subsystem 1 and Bath 1 are summarized in the stochastic force ξ 1 and the dissipative self-force of −2mγχ (1) , after we coarsegrained the degrees of freedom of B 1 . These two forces mediate the energy flow between S 1 and B 1 . The average power delivered to Subsystem 1 by the stochastic force (noise) of Bath 1 is given by P ξ1 (t) = ξ 1 (t)χ (1) (t) = 1 2m 2 t 0 ds ḋ (+) 2 (t − s) +ḋ (−) 2 (t − s) ξ 1 (t)ξ 1 (s) = e 2 2m t 0 ds ḋ (+) 2 (t − s) +ḋ (−) 2 (t − s) G β1 H (t − s) y = t − s = e 2 2m t 0 dy ḋ (+) 2 (y) +ḋ (−) 2 (y) G β1 H (y) ,(4.14) with ξ 1 (t)ξ 1 (s) = e 2 G β1 H (t − s). It tells us the rate at which the energy is transported to Subsystem 1 from Bath 1 by means of the stochastic noise. Since we are particular interested in the existence of NESS, we will pay special attention to the late-time behavior of the energy transport. 
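The coupled Langevin equations (4.1)-(4.2) can also be explored by direct simulation. The sketch below is an illustration under an explicitly stated assumption, not a result of the paper: it works in the classical high-temperature limit, where coth(β_i ω/2) → 2/(β_i ω) in (3.28), so that by (3.2) and γ = e²/8πm each bath reduces to a white noise with ⟨ξ_i(t)ξ_i(s)⟩ = 4mγ T_i δ(t − s). The simulation monitors the average rate of work done on O 1 by the coupling force −mσχ(2), the quantity analyzed as P 21 in the subsection on the energy flow between S 1 and S 2 below, and finds that it settles to a steady negative value for T 1 > T 2.

```python
import numpy as np

# Classical high-temperature sketch of the coupled Langevin equations (4.1)-(4.2).
# Assumption (not from the paper): coth(beta_i w/2) -> 2/(beta_i w), so each bath acts
# as white noise with <xi_i(t) xi_i(s)> = 4 m gamma T_i delta(t - s).
rng = np.random.default_rng(1)
m, omega_R, sigma, gamma = 1.0, 1.0, 0.3, 0.2
T1, T2 = 2.0, 0.5
Omega2R = np.array([[omega_R**2, sigma], [sigma, omega_R**2]])
noise_strength = np.array([4*m*gamma*T1, 4*m*gamma*T2])

dt, n_steps, n_burn = 2e-3, 2_000_000, 200_000
x, v = np.zeros(2), np.zeros(2)
acc = 0.0                                   # accumulates -m sigma chi2 * dot(chi1)

for step in range(n_steps):
    xi = np.sqrt(noise_strength / dt) * rng.standard_normal(2)    # discretized white noise
    v += dt * (xi - 2*m*gamma*v - m*(Omega2R @ x)) / m            # semi-implicit Euler step
    x += dt * v
    if step >= n_burn:
        acc += -m * sigma * x[1] * v[0]     # instantaneous power delivered by O2 to O1

P21 = acc / (n_steps - n_burn)
print("steady-state power delivered by O2 to O1 (classical limit):", P21)
# For T1 > T2 this settles to a negative value: net energy flows from the hot side
# (B1, O1) through the coupling into O2 and on into the cold bath B2.
# Longer runs reduce the residual statistical error of the time average.
```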
In the limit t → ∞ when the motion of Subsystem 1 is fully relaxed and noting that the fundamental solutions d i (s) = 0 if s < 0, we write this average power as P ξ1 (∞) = e 2 2m ∞ −∞ dy ḋ (+) 2 (y) +ḋ (−) 2 (y) G β1 H (y) = 4πγ ∞ −∞ dω 2π − i ω d (+) 2 (ω) + d (−) 2 (ω) G β1 H (ω) ,(4.15) where γ = e 2 /8πm, and we have defined the Fourier transformation as (4.16) so that the convolution integrals are given by f (t) = ∞ −∞ dω 2πf (ω) e −i ωt , ⇔ f (ω) = ∞ −∞ dt f (t) e +i ωt ,∞ −∞ dt f (t)g(t) = ∞ −∞ dω 2π f (ω) g(−ω) , (4.17) ∞ −∞ dt ∞ −∞ dt f (t)g(t )h(t − t ) = ∞ −∞ dω 2π f (ω) g(−ω) h(ω) . (4.18) The Fourier transforms d (+) 2 (ω), d (+) 2 (ω), and G β1 H (ω) are d (±) 2 (ω) = 1 ω 2 ± − ω 2 − i 2γω , (4.19) G β H (ω) = coth βω 2 Im G R (ω) = ω 4π coth βω 2 . (4.20) Here and henceforth, we will not make distinction between between D D D i (t) and However, in practice, they serve the same purpose to the current case. Thus when we refer to the fundamental solution, we use the notations D i (t) or d i (t) for both cases unless mentioned otherwise. The power done by the dissipative force −2mγχ (1) of Subsystem 1 is P γ1 (t) = −2mγ χ (1) 2 (t) = −4πγ 2 t 0 ds t 0 ds (4.21) ḋ (+) 2 (t − s) +ḋ (−) 2 (t − s) ḋ (+) 2 (t − s ) +ḋ (−) 2 (t − s ) G β1 H (s − s ) + ḋ (+) 2 (t − s) −ḋ (−) 2 (t − s) ḋ (+) 2 (t − s ) −ḋ (−) 2 (t − s ) G β2 H (s − s ) . Here we have ignored the contributions independent of the baths, which will become exponentially negligible after the subsystems are fully relaxed. We also assumed that both baths are independent of each other and the initial states of both subsystems are not correlated with either bath. We see right away that the dissipation power already depends on both reservoirs; the connection of System 1 with Bath 2 is established through the coupling mσχ (1) χ (2) between the two subsystems. Thus in this case that one should not expect that at late times the power delivered by the stochastic force or noise be exactly in balance with the dissipative power as is the equilibrium case for a single Brownian oscillator in contact with one bath. After the subsystem is fully relaxed, the power done by the self-force becomes P γ1 (∞) = −4πγ 2 ∞ −∞ dω 2π ω 2 d (+) 2 (ω) +d (−) 2 (ω) d (+) 2 (ω) +d (−) 2 (ω) * G β1 H (ω) + d (+) 2 (ω) −d (−) 2 (ω) d (+) 2 (ω) −d (−) 2 (ω) * G β2 H (ω) , (4.22) where we have used the convolution integral (4.18) and (4.19). It is instructive to take a closer look into the expressions in (4.22) that are associated withG β1 H of Bath 1. We observe that P (1) γ1 (∞) = −4πγ 2 ∞ −∞ dω 2π ω 2 d (+) 2 (ω) +d (−) 2 (ω) d (+) 2 (ω) +d (−) 2 (ω) * G β1 H (ω) = i πγ ∞ −∞ dω 2π ω d (+) 2 (ω) +d (−) 2 (ω) −d (+) 2 (−ω) −d (−) 2 (−ω) G β1 H (ω) + ∞ −∞ dω 2π ω 2 d (−) 2 (ω)d (+) * 2 (ω) +d (+) 2 (ω)d (−) * 2 (ω) G β1 H (ω) = i 2πγ ∞ −∞ dω 2π ω d (+) 2 (ω) +d (−) 2 (ω) G β1 H (ω) + ∞ −∞ dω 2π ω 2 d (−) 2 (ω)d (+) * 2 (ω) +d (+) 2 (ω)d (−) * 2 (ω) G β1 H (ω) , in which we have used d (±) 2 (ω)d (±) * 2 (ω) = − i 4γω d (±) 2 (ω) −d (±) * 2 (ω) = − i 4γω d (±) 2 (ω) −d (±) 2 (−ω) , (4.23) G β H (ω) = G β H (−ω) . 
(4.24) Now if we combine this contribution P (1) γ1 (∞) in the total dissipative power P γ1 (∞) with P ξ1 (∞), we end up with P ξ1 (∞) + P (1) γ1 (∞) = 4πγ ∞ −∞ dω 2π − i ω d (+) 2 (ω) +d (−) 2 (ω) G β1 H (ω) + 2πγ ∞ −∞ dω 2π i ω d (+) 2 (ω) +d (−) 2 (ω) G β1 H (ω) + ∞ −∞ dω 2π ω 2 d (−) 2 (ω)d (+) * 2 (ω) +d (+) 2 (ω)d (−) * 2 (ω) G β1 H (ω) = −2πγ ∞ −∞ dω 2π i ω d (+) 2 (ω) +d (−) 2 (ω) G β1 H (ω) + ∞ −∞ dω 2π ω 2 d (−) 2 (ω)d (+) * 2 (ω) +d (+) 2 (ω)d (−) * 2 (ω) G β1 H (ω) = 4πγ 2 ∞ −∞ dω 2π ω 2 d (+) 2 (ω) −d (−) 2 (ω) d (+) 2 (ω) −d (−) 2 (ω) * G β1 H (ω) . (4.25) This implies that after the subsystems are fully relaxed, the net energy transport rate, or power input into Subsystem 1 from Bath 1 is given by P ξ1 (∞) + P γ1 (∞) = P ξ1 (∞) + P (1) γ1 (∞) + P (2) γ1 (∞) (4.26) = −4πγ 2 ∞ −∞ dω 2π ω 2 d (+) 2 (ω) −d (−) 2 (ω) d (+) 2 (ω) −d (−) 2 (ω) * G β2 H (ω) −G β1 H (ω) . Note its dependence onG β2 H (ω) −G β1 H (ω), which vanishes when there is no temperatures difference between the two baths, β −1 2 = β −1 1 . Energy Flow between S 1 and S 2 Next we consider the energy flow between Subsystems 1 and 2. The power delivered by Subsystem 2 to Subsystem 1 is given by P 21 (t) = −mσ χ (2) (t)χ (1) (t) = − σ 4m t 0 ds t 0 ds d (+) 2 (t − s) − d (−) 2 (t − s) ḋ (+) 2 (t − s ) +ḋ (−) 2 (t − s ) ξ 1 (s)ξ 1 (s ) + d (+) 2 (t − s) + d (−) 2 (t − s) ḋ (+) 2 (t − s ) −ḋ (−) 2 (t − s ) ξ 2 (s)ξ 2 (s ) + homogeneous terms independent of stochastic forces . In the limit t → ∞, the homogeneous terms vanish and we are left with P 21 (∞) (4.28) = i 2πσγ ∞ −∞ dω 2π ω d (+) 2 (ω) −d (−) 2 (ω) d (+) 2 (ω) +d (−) 2 (ω) * G β2 H (ω) −G β1 H (ω) . Observe the similarity in form with (4.26) except for the sign difference in the second square bracket. Let us now compute the average power P 12 (t) = −mσ χ (1) (t)χ (2) (t) Subsystem 1 delivers to Subsystem 2 via their mutual interaction. In the same manners as we arrive at (4.28), we find that at late time t → ∞, the power P 12 (∞) is given by P 12 (∞) = i 2πσγ ∞ −∞ dω 2π ω d (+) 2 (ω) −d (−) 2 (ω) d (+) 2 (ω) +d (−) 2 (ω) * × G β2 H (ω) −G β1 H (ω) = −P 21 (∞) . (4.29) Note this is NOT the consequence of Newton's third law because P 12 is not the time rate of work ( = power ) done by the reaction force, namely, the force Subsystem 2 exerts on Subsystem 1. Energy Flow between S 2 and B 2 Finally let us look at the energy flow between Subsystem 2 and Bath 2. First, the average power delivered by the stochastic force ξ 2 from Bath 2 on Subsystem 2 is P ξ2 (t) = ξ 2 (t)χ (2) (t) . At late limit t → ∞, we have P ξ2 (∞) = 4πγ ∞ −∞ dω 2π − i ω d (+) 2 (ω) +d (−) 2 (ω) G β2 H (ω) . (4.30) Similarly the power delivered by the dissipation force −2mγχ (2) to this subsystem is defined by P γ2 (t) = −2mγ χ (2) 2 (t) , and it becomes P γ2 (∞) = −4πγ 2 ∞ −∞ dω 2π ω 2 d (+) 2 (ω) −d (−) 2 (ω) d (+) 2 (ω) −d (−) 2 (ω) * G β1 H (ω) + d (+) 2 (ω) +d (−) 2 (ω) d (+) 2 (ω) +d (−) 2 (ω) * G β2 H (ω) , (4.31) in the limit t → ∞. Following the procedures that lead to (4.26), we find that after the subsystems are fully relaxed, the net power flows into Subsystem 2 from Bath 2 is P ξ2 (∞) + P γ2 (∞) (4.32) = 4πγ 2 ∞ −∞ dω 2π ω 2 d (+) 2 (ω) −d (−) 2 (ω) d (+) 2 (ω) −d (−) 2 (ω) * G β2 H (ω) −G β1 H (ω) . Compared to the energy flow from B 1 to S 1 , namely, P ξ1 (∞) + P γ1 (∞) derived in (4.26) , this carries the opposite sign. This has to be the case for a stationary state to be established. 
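As a cross-check of the balance just noted, the frequency integrals in (4.26), (4.28) and (4.32) can be evaluated by straightforward numerical quadrature using the transforms (4.19)–(4.20). The short Python sketch below is illustrative only: the parameter values are arbitrary and units with ħ = k_B = m = 1 are assumed.

```python
import numpy as np

# Arbitrary illustrative parameters (hbar = k_B = m = 1); bath 2 is taken hotter
gamma, sigma, wR = 0.1, 0.3, 1.0
beta1, beta2 = 2.0, 0.5
wp2, wm2 = wR**2 + sigma, wR**2 - sigma          # normal-mode frequencies squared

d2t = lambda w, w02: 1.0/(w02 - w**2 - 2j*gamma*w)     # Fourier transform, eq. (4.19)
GH  = lambda w, beta: w/(4*np.pi)/np.tanh(beta*w/2)    # noise kernel, eq. (4.20)

w  = np.linspace(1e-6, 80.0, 400001)             # positive frequencies only
dw = w[1] - w[0]
dp, dm = d2t(w, wp2), d2t(w, wm2)
dG = GH(w, beta2) - GH(w, beta1)

# Net power into S1 from B1, eq. (4.26); the integrand is even in w, hence the factor 2.
# Eq. (4.32) is the same integral with the opposite sign.
P_B1S1 = -4*np.pi*gamma**2/(2*np.pi) * 2*np.sum(w**2 * np.abs(dp - dm)**2 * dG) * dw

# Power delivered by S2 to S1, eq. (4.28); only the even real part of the integrand contributes.
f = 1j*w*(dp - dm)*np.conj(dp + dm)*dG
P_21 = 2*np.pi*sigma*gamma/(2*np.pi) * 2*np.sum(f.real) * dw

print(P_B1S1, P_21, P_B1S1 + P_21)               # the sum vanishes: stationarity
```

With bath 2 hotter, P_B1S1 comes out negative and P_21 positive with equal magnitude, anticipating the stationarity condition derived below.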
It says that after the system S = S 1 +S 2 is fully relaxed, the energy which flows into S from Bath 1 at temperature β −1 1 is the same as that out of S into Bath 2 at temperature β −1 2 . One last task remains in demonstrating the existence of a NESS: we must show that the magnitude of energy flow between the system S and either bath is also the same as the energy flow between the two subsystems, that is, −P 21 (∞) = P ξ1 (∞) + P γ1 (∞). We now show that this is indeed so. Condition of Stationarity Owing to the facts that d (+) 2 (ω) −d (−) 2 (ω) = −2σd (+) 2 (ω)d (−) 2 (ω) , (4.33) d (+) 2 (ω) +d (−) 2 (ω) = 2 ω 2 R − ω 2 − i 2γω d (+) 2 (ω)d (−) 2 (ω) , (4.34) we can write P 21 (∞) as P 21 (t) = i 2πσγ ∞ −∞ dω 2π ω d (+) 2 (ω) −d (−) 2 (ω) d (+) 2 (ω) +d (−) 2 (ω) * G β2 H (ω) −G β1 H (ω) = 16πγ 2 σ 2 ∞ −∞ dω 2π ω 2 d (+) 2 (ω) 2 d (−) 2 (ω) 2 G β2 H (ω) −G β1 H (ω) , (4.35) because the imaginary of the integrand is an odd function of ω. As for P ξ1 (∞) + P γ1 (∞), we can also show that P ξ1 (∞) + P γ1 (∞) = −4πγ 2 ∞ −∞ dω 2π ω 2 d (+) 2 (ω) −d (−) 2 (ω) d (+) 2 (ω) −d (−) 2 (ω) * G β2 H (ω) −G β1 H (ω) = −16πγ 2 σ 2 ∞ −∞ dω 2π ω 2 d (+) 2 (ω) 2 d (−) 2 (ω) 2 G β2 H (ω) −G β1 H (ω) , (4.36) with ω 2 ± = ω 2 R ± σ. Eqs. (4.35) and (4.36) indicate that indeed we have −P 21 (∞) = P ξ1 (∞) + P γ1 (∞), or P 21 (∞) + P ξ1 (∞) + P γ1 (∞) = 0 . (4.37) This has an interesting consequence. From the Langevin equations (4.1) and (4.2), if we multiply them withχ (1) andχ (2) respectively and take their individual average, we arrive at d dt E (1) k = P 21 + P ξ1 + P γ1 , (4.38) d dt E (2) k = P 12 + P ξ2 + P γ2 , (4.39) where E (i) k is the mechanical energy of each subsystem, E (i) k = 1 2 mχ (i) 2 + 1 2 mω 2 R χ (i) 2 . (4.40) Eq. (4.37) then says that the mechanical energy of each subsystem is conserved when the whole system reaches relaxation. In addition, the condition of sta- tionarity P ξ1 + P γ1 = −P ξ2 − P γ2 (4.41) implies the energy of the whole system will go to a fixed value at late time Eq. (4.41) also says that in the end we must have P 21 + P 12 = 0. This is not obvious when compared with the corresponding closed systems. If there is no reservoir in contact with either subsystem, then although the total energy of the whole system (internal energy) is a constant value, the mechanical energy in each subsystem is not. The energy is transferred back and forth between subsystems via their mutual coupling mσ χ (1) χ (2) . Thus in the case of the closed systems, d dt   i=1, 2 1 2 mχ (i) 2 + 1 2 mω 2 R χ (i) 2   + mσ χ (1) χ (2) = 0 , as t → ∞ . A Mathematically More Concise Derivation In the above we sought a balance relation by displaying the energy flows between components explicitly. This has the advantage of seeing the physical processes in great detail and clarity. There is a mathematically more concise formulation of energy transport which we present here. We adopt a matrix notation for ease of generalization to the harmonic chain case treated in the following sections. Since Subsystem 2 exerts a force −mσ χ (2) on Subsystem 1, the average power delivered by Subsystem 2 to Subsystem 1 is P 21 (t) = −mσ χ (2) (t)χ (1) (t) = −mσ lim t →t d dτ χ (2) (t)χ (1) (t ) = −mσ ς 2 2 D 1 (t) ·Ḋ 1 (t) + 1 2m 2 ς 2 D 2 (t) ·Ḋ 2 (t) 21 + e 2 m 2 t 0 ds ds D 2 (t − s) · G ab H (s − s ) ·Ḋ b1 2 (t − s ) 21 , (4.43) where we have used (3.11) and the properties that D i are symmetric. 
As noted before the expressions within the square brackets approach zero at late times, so in the limit τ → ∞, the power P 21 (t) becomes P 21 (∞) = − e 2 σ m ∞ −∞ ds ds D 2 (s) · G H (s − s ) ·Ḋ 2 (s ) 21 . (4.44) Recall that since D ij 2 (s) = 0 for s < 0, we can extend the lower limit of the integration to minus infinity. Expressing the integrand by the Fourier transform of each kernel function yields P 21 (∞) = − e 2 σ m ∞ −∞ dω 2π −i ω D * 2 (ω) · G H (ω) · D 2 (ω) 21 . (4.45) It says that the average power delivered by Subsystem 2 (S 2 ) on Subsystem 1 (S 1 ) approaches a constant eventually. Next we examine the corresponding power transfer between the S 1 and its private bath (B 1 ). The average power delivered by the stochastic force (noise) from B 1 to S 1 is P ξ1 (t) = ξ (1) (t)χ (1) (t) = 1 m t 0 dsḊ (2) 1a (t − s) ξ 1 (t)ξ a (s) = e 2 m t 0 ds Ḋ 2 (t − s) · G H (t − s) 11 . (4.46) Hence at late times t → ∞, the average power P ξ1 becomes P ξ1 (∞) = e 2 m ∞ −∞ dω 2π i ω D * 2 (ω) ·G H (ω) 11 . (4.47) Likewise, the average power delivered by the dissipative force in S 1 from the backaction of B 1 is described by P γ1 (t) = −2mγ χ (1) 2 (t) = − 2e 2 γ m t 0 ds ds Ḋ 2 (s) · G H (s − s ) ·Ḋ 2 (s ) 11 . (4.48) Again, we have ignored contributions which are exponentially small at late times. The value of P γ1 in the limit t → ∞ is given by P γ1 (∞) = − 2e 2 γ m ∞ −∞ dω 2π ω 2 D * 2 (ω) · G H (ω) · D 2 (ω) 11 . (4.49) Therefore the net energy transfer at late times between B 1 and S 1 is P S1 = P ξ1 (∞) + P γ1 (∞) = e 2 m ∞ −∞ dω 2π i ω D 1a * 2 (ω) G 1a H (ω) + i 2γω D 1b 2 (ω) G ab H (ω) = e 2 m ∞ −∞ dω 2π i ω D 1a * 2 I 1b + i 2γω D 1b 2 (ω) G ab H (ω) = e 2 m ∞ −∞ dω 2π i ω I + i 2γω D 2 (ω) · G H (ω) · D † 2 (ω) 11 ,(4.50) where we have used the symmetric property of G H . We next note that the Fourier transform D 2 (ω) satisfies D −1 2 (ω) = Ω Ω Ω 2 − ω 2 I − i 2γω I , ⇒ Ω Ω Ω 2 − ω 2 I − i 2γω I · D 2 (ω) = I , ⇒ Ω Ω Ω 2 − ω 2 I · D 2 (ω) = I + i 2γω D 2 (ω) ,(4.51) with I being a 2 × 2 identity matrix. Putting this result back into (4.50), we arrive at P S1 = e 2 m ∞ −∞ dω 2π i ω Ω Ω Ω 2 − ω 2 I · D 2 (ω) · G H (ω) · D † 2 (ω) 11 . (4.52) From the definition of Ω Ω Ω 2 , we see that Ω Ω Ω 2 − ω 2 I ab is in fact σ δ a1 δ b2 + δ a2 δ b1 , so that (4.52) becomes P S1 = − e 2 σ m ∞ −∞ dω 2π −i ω D * 2 (ω) · G H (ω) · D 2 (ω) 12 ,(4.53) where we have used the symmetry property of D 2 and G H . Compared to (4.45), we see that P S1 is equal to P 12 (∞) at late time. It means that if P S1 > 0, then the energy flow from the bath 1 to the subsystem 1 is equal to the energy flow from the subsystem 1 to subsystem 2. In addition from (4.45), we can show that at late time P 12 (∞) = −P 21 (∞) as follows. Since by construction P 21 (∞) is a real physical quantity, if we take the complex conjugate of P 21 (∞) we should return to the very same P 21 (∞), that is P 21 (∞) = P * 21 (∞) = − e 2 σ m ∞ −∞ dω 2π i ω D 2a 2 (ω) G ab H (ω) D b1 * 2 (ω) (4.54) = e 2 σ m ∞ −∞ dω 2π −i ω D 1b * 2 (ω) G ab H (ω) D a2 2 (ω) = −P 12 (∞) . Thus we also establish that P S1 = −P 21 (∞). To make connection with (4.35), we observe that from (3.26), we can relate the elements of the fundamental solution matrices with the corresponding fundamental solutions of the normal modes by D 11 2 (ω) = D 22 2 (ω) = 1 2 d (+) 2 (ω) + d (−) 2 (ω) = ω 2 R − ω 2 − i 2γω d (+) 2 (ω)d (−) 2 (ω) , D 12 2 (ω) = D 21 2 (ω) = 1 2 d (+) 2 (ω) − d (−) 2 (ω) = −σd (+) 2 (ω)d (−) 2 (ω) , from (4.34)-(4.33). 
This enable us to write (4.54) as P 21 (∞) = 16πγ 2 σ 2 ∞ −∞ dω 2π ω 2 d (+) 2 (ω) 2 d (−) 2 (ω) 2 G β2 H (ω) −G β1 H (ω) . (4.55) Thus we recover (4.35). Steady State Energy Flow at High and Low Temperatures We may define the steady energy flow J by (4.57) J ≡ P 21 = P ξ2 (∞) + P γ2 (∞) = 16πγ 2 σ 2 ∞ −∞ dω 2π ω 2 d (+) 2 (ω) 2 d (−) 2 (ω) 2 G β2 H (ω) −G β1 H (ω) , In the high temperature limit β i → 0, we haveG βi H (ω) ≈ 1/(2πβ i ) so that the steady energy current becomes, when β i ω 1 J 8γ 2 σ 2 β −1 2 − β −1 1 ∞ −∞ dω 2π ω 2 d (+) 2 (ω) 2 d (−) 2 (ω) 2 ∝ (T 2 − T 1 ) . (4.58) The integral in (4.58) can be exactly carried out, and it is given by ∞ −∞ dω 2π ω 2 d (+) 2 (ω) 2 d (−) 2 (ω) 2 = 1 8γ 1 σ 2 + 4γ 2 ω 2 R ,(4.59) with ω 2 ± = ω 2 R ± σ. Therefore the steady energy current in the high temperature limit is given by J = γσ 2 σ 2 + 4γ 2 ω 2 R ∆T =    γ ∆T , γω R σ , σ 2 4γω 2 R ∆T , γω R σ ,(4.60) where ∆T = T 2 − T 1 , for different relative coupling strengths between the subsystems and the reservoirs. When γ → 0, that is, when the coupling between the subsystems and their baths is turned off, there is no energy flow. Likewise, if there is no coupling between the subsystems σ → 0, the energy flow also terminates, as also expected. Heat Conductance We define the thermal conductance K by the ratio of the steady current over the temperature difference between the reservoirs, K = lim ∆T →0 J ∆T . (4.61) Thus we find that in the high temperature limit βω R 1, the conductance K = γ σ 2 σ 2 + 4γ 2 ω 2 R ,(4.62) becomes independent of temperature but only depends on the parameters σ, γ and ω R . From Fig. 4.1, we see that the conductance monotonically increases with the inter-oscillator coupling σ, and gradually approaches the value γ as long as the constraint σ ≤ ω 2 R is still satisfied. On the other hand, when we fix the inter-oscillator coupling, the conductance rises up to a maximum value σ/4ω R at γ = σ/2ω R , and then gradually decreases to zero as the system-environment coupling γ increases. From the expression of d (±) 2 (ω) 2 we note that it traces out a Breit-Wigner resonance curve with respect to ω. The resonance feature is well-defined only when γ is sufficiently small, that is, γ ω R . The peak is located at about ω = (ω 2 R ± σ) 1/2 and the width of the peak is about 2γ. Therefore for a fixed value of Ω, the inter-oscillator coupling constant σ determines the location of the resonance peak, while the system-environment coupling constant γ determines the width of the resonance. The integrand (4.56) contains a product of d (+) 2 (ω) 2 d (−) 2 (ω) 2 , which indicates that there are two resonance peaks at (ω 2 R ± σ) 1/2 respectively. Hence the distance between these two peaks is ω 2 R + σ 1 2 − ω 2 R − σ 1 2 σ ω R . When the separation of peaks is much greater than the width, it has two distinct, well-defined peaks. If the separation becomes smaller than the characteristic width of each peak, σ < γω R , then the two peaks gradually fuse into one peak. This change of the dominant scale is reflected in the behavior of the conductance K in the respective regimes, K =    γ , γω R σ , σ 2 4γω 2 R , γω R σ ,(4.63) as can be seen from (4.60). 
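The location and height of the conductance maximum quoted above, as well as the two limiting regimes in (4.63), are easy to confirm numerically from (4.62). The following sketch is purely illustrative; the parameter values are arbitrary.

```python
import numpy as np

sigma, wR = 0.2, 1.0                              # arbitrary values, with sigma < wR**2

def K(gamma):
    """High-temperature conductance, eq. (4.62)."""
    return gamma*sigma**2/(sigma**2 + 4*gamma**2*wR**2)

g = np.linspace(1e-4, 1.0, 100001)
g_max = g[np.argmax(K(g))]
print(g_max, sigma/(2*wR))                 # maximum located at gamma = sigma/(2*wR)
print(K(g_max), sigma/(4*wR))              # maximal conductance sigma/(4*wR)

# Limiting regimes of eq. (4.63)
print(K(1e-3), 1e-3)                       # gamma*wR << sigma :  K ~ gamma
print(K(0.9), sigma**2/(4*0.9*wR**2))      # gamma*wR >> sigma :  K ~ sigma^2/(4*gamma*wR^2)
```

The printed pairs agree to within the accuracy expected of the limiting forms, exhibiting the crossover at γ = σ/(2ω_R).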
In the low temperature limit βω ± 1, we may write the Bose-Einstein distribution factor in (4.56) as G β2 H (ω) −G β1 H (ω) = ω 2π ∞ n=1 e −nβ2ω − e −nβ1ω , and the steady state current becomes J = 8πγ 2 σ 2 π ∞ n=1 ∞ 0 dω ω 3 d (+) 2 (ω) 2 d (−) 2 (ω) 2 e −nβ2ω − e −nβ1ω = 48πγ 2 σ 2 πω 4 + ω 4 − 1 β 4 2 − 1 β 4 1 ∞ n=1 1 n 4 + · · · , = 8π 3 15 γ 2 σ 2 ω 4 R − σ 2 2 1 β 4 2 − 1 β 4 1 . (4.64) The summation over n gives ζ(4) = π 4 /90. This familiar number comes from the higher order expansions of the coth z function in the limit z → ∞. If we write the steady current (4.64) in terms of temperature, we obtain J = 8π 3 15 γ 2 σ 2 ω 4 R − σ 2 2 T 4 2 − T 4 1 = 8π 3 15 γ 2 σ 2 ω 4 R − σ 2 2 4T 3 ∆T + T ∆T 3 , (4.65) where ∆T = T 2 − T 1 and T = (T 2 + T 1 )/2. We see that the temperature dependence of the steady current is different from the high temperature limit. In the low temperature limit it is proportional to T 4 2 − T 4 1 . However, for fixed T, the current turns out more or less linearly proportional to the temperature difference between the reservoirs except for the case ∆T T, which is equivalent to T 1 < 3T 2 , where the contributions of the ∆T 3 terms appreciable. In the regime ∆T T, the steady state current is J 32π 3 15 γ 2 σ 2 ω 4 R − σ 2 2 T 3 ∆T ,(4.66) and then we may use the definition of the conductance (4.61) to find: K = 32π 3 15 γ 2 σ 2 ω 4 R − σ 2 2 T 3 . (4.67) Apparently in the low temperature limit, the conductance depends on T, which is the mean temperature of the two reservoirs. In Fig. 4.2, we plot the steady state current as a function of the temperature T 1 of Bath 1, with the temperature difference ∆T fixed, according to (4.56). We see in the high temperature limit, the current approaches a constant, independent of T 2 , T 1 , as long as ∆T is fixed, which is consistent with (4.60). Harmonic Chain We now extend the previous results to a one-dimensional chain of n harmonic In analogy with the case of two oscillators in the previous sections, here the column matrix χ χ χ has n entries, so does the row matrix χ χ χ T = (χ (1) , χ (2) , · · · , χ (n) ). The matrices Ω Ω Ω 2 , I and G are now spanned to n × n matrices, Ω Ω Ω 2 =           ω 2 σ 0 0 · · · 0 σ ω 2 σ 0 · · · 0 0 . . . . . . . . . . . . 0 0 · · · 0 σ ω 2 σ 0 · · · 0 0 σ ω 2           , I =           1 0 0 0 · · · 0 0 0 0 0 · · · 0 . . . . . . . . . . . . . . . . . . 0 · · · 0 0 0 0 0 · · · 0 0 0 1           , and G(s, s ) =           G β1 (s, s ) 0 0 0 · · · 0 0 0 0 0 · · · 0 0 . . . . . . . . . . . . 0 0 · · · 0 0 0 0 0 · · · 0 0 0 G βn (s, s )           . (5.1) In addition, we have the same stochastic effective action as in (3.4) except that now the stochastic force is a column vector ξ ξ ξ T = (ξ (1) , 0, · · · , 0, ξ (n) ) and its moments satisfy the Gaussian statistics ξ ξ ξ(s) = 0 , ξ ξ ξ(s) · ξ ξ ξ T (s ) = G H (s, s ) . (5.2) Note that although the matrix G in (5.1) is not invertible, it does not prevent us from writing down the stochastic effective action. In fact we can do it by components and then write them back into the tensor notation. Note the stochastic average is defined only with respect to the first and the final components of the (vectorial) stochastic force ξ ξ ξ. Taking the variation of the stochastic effective action with respect to q and letting q = 0, we arrive at the Langevin equation, mχ χ χ(s) + 2mγ I ·χ χ χ(s) + m Ω Ω Ω 2 R · χ χ χ(s) = ξ ξ ξ(s) . 
(5.3) where Ω Ω Ω 2 R is the same as Ω Ω Ω 2 in the structure except that we replace ω 2 by ω 2 R due to renormalization. Existence of a Steady Current The derivation of the energy currents between the components of the total system is similar to those presented in Sec. 4.3. Thus the net energy flow from Bath 1 (B 1 ) to Oscillator 1 (O 1 ) at late times is given by J 1 = P ξ1 (∞) + P γ1 (∞) = 8γ 2 ∞ −∞ dω ω 2 D(ω) · I · D * (ω) · G T H (ω) − D(ω) · G H (ω) · D † (ω) 11 = 8γ 2 ∞ −∞ dω ω 2 D 1n (ω) 2 G 11 H (ω) − G nn H (ω) . (5.4) Here we note that the prefactor D 1n (ω) 2 in the integrand of (5.4) has dependence on the length of the chain n. We will take a closer look at its behavior later. J k,k+1 = −i e 2 σ m ∞ −∞ dω 2π ω D(ω) · G H (ω) · D † (ω) k,k+1 (5.5) = −i 4γσ ∞ −∞ dω ω D k,1 (ω) D k+1,1 * (ω) G 11 H (ω) + D k,n (ω) D k+1,n * (ω) G nn H (ω) . Eq. (5.5) does not have any reference to time, so the energy current from O k to O k+1 is also a time-independent constant. To show the equality between (5.4) and (5.5), we will relate D k,1 (ω) D k+1,1 * (ω) or D k,n (ω) D k+1,n * (ω) to D 1n (ω) 2 so that we can factor out the noise kernel G H (ω) in (5.5). These relations are provided in Appendix A. Using the results in (7.27) and (7.30) D k,1 D k+1,1 * = + i c σ D 1n 2 + · · · , (5.6) D n−k,1 * D n−k+1,1 = − i c σ D 1n 2 + · · · ,(5.7) where . . . denotes terms which will have vanishing contributions to the integral (5.5), we can rewrite (5.5) as J k,k+1 = 8γ 2 ∞ −∞ dω ω 2 D 1n (ω) 2 G 11 H (ω) − G nn H (ω) ,(5.8) recalling that c = 2γω. We immediately see that for any neighboring oscillators O k and O k+1 located along the chain, the energy current J k,k+1 between them is exactly the same as the current J 1 transported from Bath 1 to Oscillator O 1 in (5.4). As a final touch, we compute the energy current from Bath B n , located at the opposite end of the chain, to Oscillator O n . From (5.4), we find this current is given by J n = 8γ 2 ∞ −∞ dω ω 2 D(ω) · I · D * (ω) · G T H (ω) − D(ω) · G H (ω) · D † (ω) nn = −8γ 2 ∞ −∞ dω ω 2 D 1n (ω) 2 G 11 H (ω) − G nn H (ω) . (5.9) It has the same magnitude as J 1 in (5.4) and J k,k+1 in (5.8), but opposite in sign, which says that the current flows from Oscillator O n to Bath B n , as is also expected. In summary, given a quantum harmonic oscillator chain, where each oscil- D 1n (ω) 2 = σ 2n−2 |θ n | 2 , (5.11) where |θ n | 2 = f 2 2 c 4 + 2f 1 c 2 + f 2 0 + 2σ 2(n−1) c 2 > 0 . (5.12) Once f 0 , f 1 , f 2 is found we can derive the analytic expression of |θ n | 2 . Indeed we have shown this in the Appendix, where the general expression of f k is given by f k = µ n−k+1 1 − µ n−k+1 2 µ 1 − µ 2 ,(5.13) with the roots of the characteristic equation given by µ 1 = a + √ a 2 − 4σ 2 2 , µ 2 = a − √ a 2 − 4σ 2 2 ,(5.14) we immediately see that when a 2 < 4σ 2 , that is, when ω lies in the interval √ Ω 2 − 2σ < ω < √ Ω 2 + 2σ, two roots µ 1 , µ 2 are complex-conjugated. It in turn implies that θ n , as well as D 1n (ω) 2 , will be highly oscillatory with ω in this interval. Analytically θ n 2 is described by |θ n | 2 = σ 2n sin 2 ψ sin 2 (n + 1)ψ + 2 c 2 σ 2 1 + sin 2 nψ + We also argue in the Appendix that outside the interval √ Ω 2 − 2σ < ω < √ Ω 2 + 2σ, the transmission coefficient falls to a vanishingly small value very rapidly. Therefore, generically speaking ω 2 D 1n (ω) 2 will have a comb-like structure within the band √ Ω 2 − 2σ < ω < √ Ω 2 + 2σ, as is shown in Fig 5.1. 
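This comb structure can also be generated directly, without the recursion relations, by numerically inverting the matrix D^(-1)(ω) = Ω²_R − ω²·1 − i2γω·I that defines the fundamental solution. The sketch below is illustrative only: the parameter values are arbitrary, Ω denotes the renormalized oscillator frequency, and the damping matrix I has nonzero entries only at the two end sites, as in (5.1).

```python
import numpy as np

# Arbitrary illustrative parameters for a short chain
n, sigma, gamma, Omega = 10, 0.3, 0.02, 1.0

# Tridiagonal frequency matrix of eq. (5.1) and the end-site damping matrix
Omega2 = np.diag(np.full(n, Omega**2)) + sigma*(np.eye(n, k=1) + np.eye(n, k=-1))
Iends  = np.zeros((n, n)); Iends[0, 0] = Iends[-1, -1] = 1.0

def T(w):
    """Transmission-like factor w^2 |D_1n(w)|^2 from inverting D^{-1}(w)."""
    Dinv = Omega2 - w**2*np.eye(n) - 2j*gamma*w*Iends
    return w**2 * np.abs(np.linalg.inv(Dinv)[0, -1])**2

# Scan the band sqrt(Omega^2 - 2 sigma) < w < sqrt(Omega^2 + 2 sigma)
band = np.linspace(np.sqrt(Omega**2 - 2*sigma) + 1e-4,
                   np.sqrt(Omega**2 + 2*sigma) - 1e-4, 20001)
t = np.array([T(w) for w in band])

# Count strict local maxima inside the band; expected to equal n, the chain length
peaks = np.sum((t[1:-1] > t[:-2]) & (t[1:-1] > t[2:]))
print(peaks)
```

Counting the local maxima inside the band reproduces one spike per oscillator, consistent with the discussion that follows.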
The lower envelope of the comb structure is traced by sin 2 ψ, while the upper envelope is determined by the subleading term in (5.15) because the maxima occur approximately at the locations where sin 2 (n + 1)ψ vanishes. In addition, recall that c = 2γω, so the upper envelope of the comb structure is almost constant. The number of the spikes of the comb structure is equal to the number of the oscillators in the harmonic chain. However, since the bandwidth of the interval is independent of n, the width of each spike will scale as n −1 . This implies, as is argued in the Appendix, that the contribution from each spike to the integral over ω in (5.10) also scales as n −1 . The argument supplied in the Appendix improves with growing n, so we see for sufficiently large n, the scaling behavior of the contribution of each spike to the steady current will be nicely counteracted by the increasing number of spikes with the band √ Ω 2 − 2σ < ω < √ Ω 2 + 2σ. Therefore it implies that the NESS energy current, in the case of the harmonic chain, is independent of the length of the chain. We show in Fig. 5.2 an exam- ple of the scaling behavior of the NESS energy current along the chain. The length of the chain ranges from n = 3 to n = 30. We see that the value quickly converges to an almost n-independent constant. While this qualitative behavior for a perfect quantum harmonic chain (with no defect, impurity or nonlinearity) may be well-known or can be reasoned out, we have hereby provided an explicit quantitative proof of it. Concluding Remarks The setup of an open system interacting with two heat baths serves as the basis for a wide range of investigations in physics, chemistry and biology. The existence of a nonequilibrium steady state in such a system is an issue of fundamental importance because, to name just one, it is the pre-condition for nonequilibrium thermodynamics, which serves as a powerful springboard for investigations in many areas of sciences and engineering. The existence and uniqueness of NESS have been studied for classical systems decades ago with rigorous mathematical proofs. We want to do the same now for quantum open systems, starting with the simpler case of continuous variable harmonic systems (in contradistinction to discrete variables such as spin chains, a subject which has seen many flowering results). The key findings of this investigation have been enumerated in the Introduction so there is no need to repeat them here. A few general concluding remarks would suffice. The broader value of this work as we see it is twofold: 1) A demonstration of the existence of a NESS for the system of interest. Rather than constructing mathematical proofs we provide the full dynamics of the system and derive ex- Appendix In this Appendix, we will derive the relations between D k,1 (ω) D k+1,1 * (ω) or D k,n (ω) D k+1,n * (ω) to D 1n (ω) 2 . They are used in Sec. 5 to set up the equality of the energy current between the neighboring sites along the chain. To this goal, we first establish some useful relations between the elements of the fundamental solution matrix D. From the definition of the fundamental solution, −ω 2 I − i 2γω I + Ω Ω Ω 2 · D(ω) = I , we see that the matrix D −1 (ω) = −ω 2 I − i 2γω I + Ω Ω Ω 2 (7.2) is symmetric with respect to the diagonal and the anti-diagonal, its inverse, the fundamental solution matrix D, also has these properties, that is, D j,k = D k,j , D j,k = D n+1−k,n+1−j . 
(7.3) In particular, it implies D j,1 = D 1,j = D n+1−j,n ,(7.4) and (5.5) becomes J j,j+1 = −i 4γσ ∞ −∞ dω ω D j,1 (ω) D j+1,1 * (ω) G 11 H (ω) + D n−j,1 * (ω) D n−j+1,1 (ω) G nn H (ω) . (7.5) Now the problem reduces to identifying a relation between D j,1 (ω) D j+1,1 * (ω) or D n−j,1 * (ω) D n−j+1,1 (ω) and D 1n (ω) 2 . To find the explicit expressions for the elements of the fundamental matrix D, we can invert (7.2). Since D −1 forms a tridiagonal matrix, its inverse can be given by the recursion relations D j,1 = −1 j+1 σ j−1 Υ j+1 θ n , j > 1 ,(7.6) where Υ n = a n , Υ n+1 = 1 and Υ j+1 = a j+1 Υ j+2 − σ 2 Υ j+2 , Υ j ∈ C ,(7.7) with j = 1, 2, . . . , n − 1 and Υ j ∈ C. The quantity θ n is the determinant of the inverse of the fundamental matrix, that is, θ n = det D −1 , and satisfies the recursion relation θ k = a k θ k−1 − σ 2 θ k−2 ,(7.8) with θ 0 = 1, θ −1 = 0 and θ k ∈ C. We introduce two shorthand notations a an a by assigning a ≡ a 1 = a n = −ω 2 − i 2γω + ω 2 R , and a ≡ a 2 = · · · = a n−1 = −ω 2 + ω 2 R . We note that a is a complex number and its imaginary part is an odd function of ω. It then proves useful to express Υ j+1 explicitly in terms of a . The motivation behind this lies in the fact that inside the square brackets of (7.5), only terms that are odd with respect to ω can have nontrivial contributions to the current. On the other hand, the only source that may contribute to the odd power in ω is the imaginary part of a . From the recursion relation (7.7), we see that in general the variable Υ j can be expanded by f j Υ j = f j a − f j+1 σ 2 , (7.9) where f j is an (n − j)-order polynomial of a with f n = 1, f n+1 = 0, and it satisfies the recursion relation, f j = a f j+1 − σ 2 f j+2 , f j ∈ R (7.10) Here we write down the first a few entries in the sequence {f j }, f n+1 = 0 , f n = 1 , f n−1 = a , f n−2 = a 2 − σ 2 , f n−3 = a 3 − 2aσ 2 , f n−4 = a 4 − 3a 2 σ 2 + σ 4 , f n−5 = a 5 − 4a 3 σ 2 + 3aσ 4 , f n−6 = a 6 − 5a 4 σ 2 + 6a 2 σ 4 − σ 4 . (7.11) Although f j follows a similar recursion relation to (7.7), introduction of f j makes it easier to identify the imaginary part of Υ j . Therefore once we find the general solution of f j via the recursion relation (7.10), we will have Υ j , which is useful to construct the general expression for the elements of the fundamental solution matrix D. The recursion relation (7.10) can be solved if we substitute f k ∝ µ −k (7.12) into (7.10), we find that µ will satisfy a characteristic equation µ 2 − a µ + σ 2 = 0 . (7.13) The two solutions, labeled by µ 1 and µ 2 , µ 1 = a + √ a 2 − 4σ 2 2 , µ 2 = a − √ a 2 − 4σ 2 2 ,(7.14) are assumed distinct, and then the general solution for f k is given by f k = p 1 µ −k 1 + p 2 µ −k 2 . (7.15) We can use the conditions f n = 1, f n+1 = 0 to fix the unknown coefficients p 1 and p 2 , f n = p 1 µ −n 1 + p 2 µ −n 2 = 1 , (7.16) f n+1 = p 1 µ −n−1 1 + p 2 µ −n−1 2 = 0 ,(7.17) so they are p 1 = µ n+1 1 µ 1 − µ 2 , p 2 = − µ n+1 2 µ 1 − µ 2 . (7.18) Thus the general solution of f k takes the form f k = µ n−k+1 1 − µ n−k+1 2 µ 1 − µ 2 = n−k m=0 µ n−k−m 1 µ m 2 . (7.19) With the help of these results, we can set up relations between D k,1 (ω) D k+1,1 * (ω) or D n−k,1 * (ω) D n−k+1,1 (ω) and D 1n (ω) 2 . To gain some insight, we first write down D 1n (ω) 2 by (7.6). Since D 1n = −1 n+1 σ n−1 Υ n+1 θ n = −1 n+1 σ n−1 θ n ,(7.20) we find D 1n 2 = σ 2n−2 |θ n | 2 . 
(7.21) Next, D s,1 D s+1,1 * can be given by D s,1 D s+1,1 * = −1 s+1 σ s−1 Υ s+1 θ n −1 s+2 σ s Υ * s+2 θ * n = −σ 2s−1 Υ s+1 Υ * s+2 |θ n | 2 . (7.22) We can expand the product Υ s+1 Υ * s+2 by (7.9), and get Υ s+1 Υ * s+2 = f s+1 a − f s+2 σ 2 f s+2 a * − f s+3 σ 2 = − f s+1 f s+3 a + f 2 s+2 a * σ 2 + · · · ,(7.23) where . . . are terms that will not contribute to the integral (7.5). Now recall that the imaginary part of a is odd with respect to ω, so we write a explicitly as a = a − i c, where a = −ω 2 + ω 2 R has been defined before while c is equal to 2γω. In so doing, we are able to condense (7.23) further to highlight its dependence on the imaginary part of a , that is, c, Υ s+1 Υ * s+2 = i c f s+1 f s+3 − f 2 s+2 σ 2 + · · · ,(7.24) The expression in the parentheses can be evaluated by (7.19), and we find f s+1 f s+3 − f 2 s+2 = −σ 2(n−s−2) ,(7.25) where we have used the fact that µ 1 µ 2 = σ 2 in (7.13) at the final step. Thus eq. (7.24) becomes Υ s+1 Υ * s+2 = −i c σ 2(n−s−1) + · · · . (7.26) We put it back to (7.22) and arrive at D s,1 D s+1,1 * = +i c σ 2n−3 |θ n | 2 + · · · = + i c σ D 1n 2 + · · · , (7.27) where we have compared the result with (7.21). Again the dots represent terms that do not contribute to the integral (7.5). Next we proceed to evaluate D n−s,1 * D n−s+1,1 . By (7.6), we obtain D n−s,1 * D n−s+1,1 = −1 n−s+1 σ n−s−1 Υ * n−s+1 θ * n −1 n−s+2 σ n−s Υ n−s+2 θ n = −σ 2(n−s)−1 Υ * n−s+1 Υ n−s+2 |θ n | 2 . (7.28) The factor Υ * n−s+1 Υ n−s+2 is then further expanded by f j as shown in (7.9), and we identify the imaginary part of a , Υ * n−s+1 Υ n−s+2 = f n−s+1 a * − f n−s+2 σ 2 f n−s+2 a − f n−s+3 σ 2 = − f 2 n−s+2 a + f n−s+1 f n−s+3 a * σ 2 + · · · = −i c f n−s+1 f n−s+3 − f 2 n−s+2 σ 2 + · · · = i c σ 2s−2 + · · · , (7.29) where we have used (7.19) and the fact a = a − i c. This implies D n−s,1 * D n−s+1,1 = −i c σ 2n−3 |θ n | 2 + · · · = − i c σ D 1n 2 + · · · , (7.30) where . . . will have vanishing contributions to the integral (7.5). Eqs. (7.27) and (7.30) are the sought-after relations between D s,1 D s+1,1 * or D n−s,1 * D n−s+1,1 and D 1n 2 . Next we turn to the scaling behavior of D 1n 2 with n. From (7.21) and following the procedures that lead to the general expression of Υ j , we find |θ n | 2 = f 2 2 c 4 + 2f 1 c 2 + f 2 0 + 2σ 2(n−1) c 2 > 0 , (7.31) where we have used (7.19) for the case k = n f 2 1 − f 0 f 2 = σ 2(n−1) , (7.32) to simplify |θ n | 2 . To draw further information about |θ n | 2 , we would like to discuss the generic behavior of f k with respect to ω. We first note that when a 2 − 4σ 2 < 0, the two solutions µ 1 , µ 2 of the characteristic equation (7.13) become complexconjugated. If we write them in terms of polar coordinate, then we have µ 1 = σ e i ψ , µ 2 = σ e −i ψ , ψ = tan −1 √ 4σ 2 − a 2 a ,(7.33) with 0 ≤ ψ ≤ π. Recall that a = −ω 2 + Ω 2 and c = 2ωγ. In terms of the frequency the condition a 2 − 4σ 2 < 0 corresponds to the frequency band Ω 2 − 2σ < ω < Ω 2 + 2σ , (7.34) within which the parameter a monotonically decreases from +2σ to −2σ and ψ steadily grows from 0 to π as ω increases. Hence f k can be written as f k = σ n−k sin(n − k + 1)ψ sin ψ , (7.35) which is heavily oscillating in ω. If we substitute (7.35) into (7.31), we find that |θ n | 2 becomes |θ n | 2 = σ 2n sin 2 (n + 1)ψ sin 2 ψ + 2σ 2(n−1) c 2 sin 2 nψ sin 2 ψ + σ 2(n−2) sin 2 (n − 1)ψ sin 2 ψ + 2σ 2(n−1) c 2 = σ 2n sin 2 ψ sin 2 (n + 1)ψ + 2 c 2 σ 2 1 + sin 2 nψ + c 4 σ 4 sin 2 (n − 1)ψ . 
(7.36) It can be greatly simplified when the coupling with the environment is weak such that γΩ σ. In this case |θ n | 2 reduces to |θ n | 2 ∼ σ 2n sin 2 ψ sin 2 (n + 1)ψ + ε . (7.37) Here ε is a small positive number as a reminder that the expression in the squared brackets is not supposed to totally vanish so that when we put |θ n | 2 back to D 1n (ω) 2 , it will not introduce artefact poles, D 1n (ω) 2 = 1 σ 2 sin 2 ψ sin 2 (n + 1)ψ + ε . (7.38) Owing to the factor sin 2 (n+1)ψ in the denominator, D 1n (ω) 2 will have maxima at ψ = kπ/(n+1) for k = 1, 2, . . . , n. As for k = 0, or n+1, since the numerator sin 2 ψ cancels with the denominator sin 2 (n + 1)ψ, there is no maximum of D 1n (ω) 2 at these two locations. It implies that there are n peaks distributed evenly 9 within the band √ Ω 2 − 2σ < ω < √ Ω 2 + 2σ. As n increases, the peaks become narrower with the width of the order π/(n + 1). 9 in particular when σ Ω 2 because in that limit, ω = Ω 2 − 2σ cos ψ ∼ Ω − σ Ω cos ψ . For the neighboring maxima, the separation between them is given by where ∆ = π/n 1. Thus the separation is independent of k in the neighborhood k∆ 1 and π − k∆ 1. On the other hand, outside the band √ Ω 2 − 2σ < ω < √ Ω 2 + 2σ, because we have a 2 − 4σ 2 > 0, both roots of the characteristic equations are real with µ 1 > µ 2 > 0 and |µ 1 | > |µ 2 | , a > 2σ , (7.39) |µ 2 | > |µ 1 | , a < −2σ . (7.40) Hence we note that when a > 2σ, µ k 1 rapidly dominates over µ k 2 as k increases, but when a < −2σ, µ k 2 quickly outgrows µ k 1 for large enough k. Thus we have f k ∼            µ n−k+1 1 µ 1 − µ 2 , a > +2σ , µ n−k+1 2 µ 1 − µ 2 , a < −2σ ,(7.41) for n − k 1. If we further assume γΩ σ, then |θ n | 2 will be approximately given by with i = 1 for a > 2σ but i = 2 for a < −2σ. Now we note that although the strong inter-oscillator coupling is allowed, the coupling strength σ is required smaller than Ω 2 /2 to avoid instability of the system. We also observe that since a = −ω 2 + Ω 2 , we have a 2 − 4σ 2 > 0 outside the band ω < √ Ω 2 − 2σ < 0 < ω < √ Ω 2 + 2σ. It implies |θ n | 2 ∼ µ 2n+2 i a 2 − 4σ 2 1 + 2c 2 µ 2 i 1 + σ µ i 2n + · · · ,(7.µ 1 − σ = a + √ a 2 − 4σ 2 2 − σ > 0 , 0 < σ µ 1 < 1 , a > 2σ , (7.44) −µ 2 − σ = −a + √ a 2 − 4σ 2 2 − σ > 0 , −1 < σ µ 2 < 0 , a < −2σ . (7.45) We then may conclude that the factor σ µ i 2(n−1) 1 , (7.46) drops to zero very fast for sufficiently large n, if 0 < ω < √ Ω 2 − 2σ or ω > √ Ω 2 + 2σ. Thus, D 1n (ω) 2 will monotonically and rapidly falls to a relatively small value outside the band √ Ω 2 − 2σ < ω < √ Ω 2 + 2σ. At this point we may draw some conclusions about D 1n (ω) 2 . In the frequency space, D 1n (ω) 2 falls monotonically and rapidly to a relatively small value outside the band √ Ω 2 − 2σ < ω < √ Ω 2 + 2σ; on the other hand, within the frequency band, it possesses comb-like structure. The number of spikes grows with the length of the chain, but the width of the spike, on the contrary, becomes narrower and narrower, inversely proportional to the length of the chain. Next how the behavior of the transmission coefficient helps to understand the dependence of the steady current on the length of the chain? A simpler question to ask is how the contribution from each spike in (7.38) will scale with n within the frequency band? We first make an observation for the integral I (k) n = k+1/2 n+1 π k−1/2 n+1 π dψ sin 2 ψ sin 2 (n + 1)ψ + (7.47) for the k th spike among n spikes confined within the interval 0 < ψ < π. The parameter is a very small positive number. 
Change of the variable ψ to = (n + 1)ψ gives I (k) n = 1 n + 1 The approximation improves for a larger value of n because the denominator of (7.48) becomes more slowly varying with ω. The contribution from the squared brackets is approximately the same for the spike at about the same locations within the interval 0 < ψ < π. Thus I (k) n will scale with n −1 for sufficiently large n. For example, let us pick one spike, say, at k = n 0 /5 for n = n 0 . Now suppose we rescale n from n = n 0 to n = 3n 0 , and then we see that the three spikes centered at about k = 3 0 /5 will have about the same height but only about one third of width, for sufficiently large n 0 . Therefore each spike in the n = 3n 0 case will contribute one third as much as that in the n = n 0 case to the integral in I (k) n , as we can see from (7.49). When we consider all the spikes with the range 0 < ψ < π, we expect I n = n k=1 I (k) n (7.50) should independent of n for sufficiently large n. Following this argument, we see the steady current J becomes J = 8γ 2 ∞ −∞ dω ω 2 D 1n (ω) 2 G 11 H (ω) − G nn H (ω) 16γ 2 σ 2 √ Ω 2 +2σ √ Ω 2 −2σ dω sin 2 ψ sin 2 (n + 1)ψ + ε ω 2 G 11 H (ω) − G nn H (ω) = 16γ 2 σ π 0 dψ sin 3 ψ sin 2 (n + 1)ψ + ε h(ω) , (7.51) with ω = Ω 2 − 2σ cos ψ, and the function h(ω) being given by h(ω) = ω G 11 H (ω) − G nn H (ω) . (7.52) Since only the denominator sin 2 (n + 1)ψ + ε is vert rapidly oscillating with ψ for large n, we can use the previous arguments to support that the steady current does not scale with n for sufficiently large n, that is J O(n 0 ) ,(7.53) when n > N 0 for some large positive integer N 0 . References the description of the dynamics of an open quantum system obtaining the time development of the reduced density operator pretty much captures its essence and evolution. We derive the reduced density operator with the influence functional and closed-time-path formalisms (for a 'no-thrill' introduction, see, e.g., Chapters 5, 6 of [59]). With this reduced density operator one can compute the time evolution of the expectation values of the operators corresponding to physical variables in the reduced system 3 Here we are interested in the energy flux (heat current) flowing between a chain of n identical coupled harmonic oscillators which together represent the system (S = n k=1 O k ). Let's call B 1 the bath which O 1 interacts with, and B 2 the bath oscillator O 2 interacts with. Thus B 1 , B 2 are affectionately named our oscillators' 'private' baths. n in a harmonic chain) at all temperatures and couplings with arbitrary strength 4 . The formal mathematical expressions of the energy current are given in • Eqs. (4.26), (4.28) and (4.32) as well as (4.53), (4.54) for a twooscillator chain, and • Eqs. (5.4), (5.5) and (5.9) for an n-oscillator chain, from which we can obtain a profile of energy currents between the components. 2. We have established the steady state value of the energy flux in (5.4), (5.5) and (5.9). Manifest equality and time-independence of these expressions implies stationarity. There is no buildup or deficit of energy in any of the components. • in the high temperature limit, (a) The steady energy current is proportional to the temperature difference between the baths, in (4.60), (b) The heat conductance is independent of the temperature of either bath, as is seen in (4.62), (c) The dependence of the conductance on two types of coupling constants is shown in (4.63) and in Fig. 4.1. 
• in the low temperature limit (a) the temperature dependence of the steady energy current, (4.64), (4.65) and in Fig. 4.2, and (b) the temperature dependence of the conductivity in (4.67). 6. We also plot the general dependence of the NESS energy current on the length of the chain n in Fig. (5.2), based on our analytical expressions (5.11), (5.12) and (5.13). It shows that • for small n, the NESS current does depend on the length in a nontrivial way; however χ χ χ(0) andχ χ χ(0) are not correlated. Notice there that our choice of the parameters m, σ, e and ω renders the fundamental solution matrices symmetric, so we do not explicit show the transposition superscript in the place it is needed. This is a good point to comment on the Langevin equation (3.5) and the derived results such as (3.11). Compared to the equation of motion of a closed system the Langevin equation describing the dynamics of a reduced system has two additional features, a stochastic forcing term (noise) on the RHS and a dissipative term on the LHS. The noise term is a representation of certain measure of coarse-graining of the environment and the backaction of the coarse-grained environment manifests as dissipative dynamics of the reduced system. In the influence functional framework the Langevin equation is obtained by taking the functional variation of the stochastic effective action S SE about the mean trajectory q → 0 in the evolution of the reduced system. One may wonder whether in this approach the derivation of the Langevin equation accounts only for the induced quantum effects from the environment but overlooks the intrinsic quantum nature of the reduced system. This is because the homogeneous solution of the Langevin equation has no explicit dependence on the stochastic variable ξ ξ ξ and thus insensitive to taking the noise distributional average defined by the probability functional P[ξ ξ ξ]. If one writes the initial conditions as quantum operators of the canonical variables, one may identify the homogeneous part of the complete solution as the quantum operators associated with the reduced sys- and (3.16) cannot be further simplified due to the fact there are two thermal baths with different temperature. The fluctuationdissipation relation in this case becomes a matrix relation e i S SE [q,r,ξ ξ ξ] . stochastic effective action given by(3.4),S SE [q, r, ξ ξ ξ] = t 0ds mq T (s) ·ṙ(s) − m q(s) · Ω Ω Ω 2 · r(s) + q T (s) · ξ ξ ξ(s)+ s 0 ds q T (s) · G R (s, s ) · r(s ) .(3.31) the mean trajectories q, r are solutions to the stochastic Langevin equation (3.7) with the boundary conditions q(t) = q b , q(0) = q a and r(t) = r b , r(0) = r a . Thus they and their time derivatives are functionals of the stochastic noise ξ ξ ξ. Explicitly, in terms of the boundary values, we can write r(s) as s look at the physics from these equations before delving into the calculations. The motion of any harmonic oscillator in the system, say, O 1 , is always affected by its neighboring oscillator(s), Oscillator 2 in the present two oscillator case, via their mutual coupling with strength mσ. O 1 is also driven into random motion by a stochastic force ξ (1) associated with quantum and thermal fluctuations of Bath 1. Fluctuations in Bath 1, described here by a scalar-field, induces a dissipative force (in general, it is a reactive force) on O 1 . 
Thus the harmonic oscillator O 1 , other than its own harmonic force, is simultaneously ( 1 ) 1the motions of O 1 and O 2 and arrive at mχ + (s) + 2mγχ + (s) + (s) + ξ (2) (s) = ξ + (s) , (4.4) mχ − (s) + 2mγχ − (s) + m ω 2 − χ − (s) = ξ (1) (s) − ξ (2) (s) = ξ − (s) (4.5) s − s )ξ + (s ) , (4.8) D i (t), as in(3.18). They differs only by an unit-step function θ(t), that is,D D D i (t) = θ(t) D i (t). Mathematically speaking they are totally different in nature; the formal is the inhomogeneous solution to the Langevin equation while the latter is the homogeneous solution with a special set of initial conditions. P 21 + 21P 12 = 0 but oscillate with time. The key difference between an open and a closed system may lie in the fact that, as t → ∞, the dynamics of an open system is determined largely by the reservoirs, at least from the viewpoint of the Langevin equation. Figure 4 . 41: variation of the conductance K with respect to the coupling constants σ or γ in the high temperature limit. Figure 4 . 42: variation of the steady state current J with respect to the temperature T 1 of the Bath 1 . The temperature difference ∆T between two reservoirs is fixed. oscillators. The oscillators at both ends, labelled as O 1 and O n , are attached to their own pivate baths of respective temperatures T 1 > T n . The remaining n−2 oscillators called collectively k = {2, 3, ..., n − 1} are insulated from these two baths, and only interact with their nearest neighbors bilinearly with coupling strength σ. lator interacts with its nearest neighbors via bilinear coupling, if the two endoscillators of the chain are placed in contact with two thermal baths of different temperatures, while the oscillators in between are kept insulated from those baths, we have explicitly shown that after a time when all the oscillators have fully relaxed, the energy flow along the chain becomes independent of time and the currents between the neighboring oscillators are the same in both2 D 1n (ω) 2 G 11 H (ω) − G nn H (ω) . (5.10)This implies that a NESS exists for a quantum harmonic oscillator chain and a steady current flows from the high temperature front along the chain to the low temperature end. There is no buildup or localization of energy at any site along the chain. We emphasize that in the transient phase before the constituent oscillators come to full relaxation, additional contributions from the homogeneous solutions of the oscillators' modes render the current between neighboring oscillators unequal, but in the course of the order of the relaxation time, the energy along the chain is re-distributed to a final constant value while the whole system settles down to a NESS.5.2. Scaling Behavior of the NESS CurrentWe have shown that after the motion of the constituents of the chain reaches relaxation, a steady thermal energy current exists flowing along the harmonic chain across from the hot thermal reservoir to the cold one. However we have not addressed the scaling behavior of the steady current with the length of the chain. To shed some light on this problem, we first analyze the prefactor D 1n (ω) 2 , which is proportional to the transmission coefficient in the Landauer formula. From (7.21), we have Figure 5 . 1 : 51The generic structure of ω 2 D 1n (ω) 2 . We choose n = 10 for example. Figure 5 . 2 : 52The scaling behavior of the NESS energy current J for a particular set of parameters. The number n ranges from 3 to 30. 
plicit expressions for the energy flow in each component leading to a proof that an energy flow balance relation exists. 2) Presenting a toolbox whereby one can derive the stochastic equations and calculate the average values of physical variables in open quantum systems -this involves both taking the expectation values of quantum operators of the system and the distributional averages of stochastic variables originating from the environment. The functional method we adopt here has the advantage that it is compact and powerful, and it can easily accommodate perturbative techniques and diagrammatic methods developed in quantum field theory to deal with weakly nonlinear open quantum systems, as we will show in a sequel paper. The somewhat laborious and expository construction presented in this paper is necessary to build up a platform for systematic investigations of nonequilibrium open quantum systems, some important physical issues therein will be discussed in future communications. To expand the second point somewhat, our approach is characterized by two essential features; a) we use a microphysics model of generic nature, namely here, a chain of harmonic oscillators interacting with two baths described by two scalar fields at different temperatures. b) this allows us to derive everything from first principles, e.g., starting with an action principle describing the interaction of all the microscopic constituents and components in the model. This way of doing things has the advantage that one knows the physics which goes into all the approximations made, in clearly marked stages. For Gaussian systems, namely, bilinear coupling between harmonic oscillators and with baths, one can solve this problem exactly, providing the fully nonequilibrium evolution of the open system with the influences of it environment (the two heat baths here) accounted for in a self-consistent manner. Self-consistency is an absolutely essential requirement which underlies the celebrated relation of Onsager, for example (one may refer to the balance relations we obtained here as the quantum Onsager relations) and realization of the symmetries in the open systems which were used for the mathematical proofs of NESS. This consistency condition is not so well appreciated in the open quantum system literature, but we see it as crucial in the exploitation of the symmetry principles mentioned above as well as in treating physical processes when memory effects (in non-Markovian processes) and when the effects of backaction are important. This includes situations when one wants to a) treat strongly correlated systems or systems subjected to colored noises b) design feedback control of quantum systems c) engineer an environment with sensitive interface with the open quantum system, to name a few. Acknowledgment This work began in the summer of 2013 when both authors visited Fudan University's Center for Theoretical Physics at the invitation of Prof. Y. S. Wu. Earlier that year BLH visited the group of Prof. Baowen Li at the National University of Singapore. Thanks are due to them for their warm hospitality. The leitmotiv to understand nonequilibrium energy transport began when BLH attended a seminar by Prof. Bambi Hu at Zhejiang University in 2010. He also thanks Prof. Dhar for useful discussions. JTH's research is supported by the National Science Council, R.O.C. 
k+1)∆−cos k∆ = − σ Ω cos k∆ cos ∆−sin k∆ sin ∆−cos k∆ σ Ω ∆ sin k∆+O(∆ 2 ) , n, the numerator of the integrand is slowly varying compared with the denominator, so it can be pulled out of the integral and is evaluated for = kπ, This could be any of the three kinds mentioned earlier: see e.g.,[63,76,67] respectively for derivations of the master, the Fokker-Planck (Wigner) and the Langevin equations. In the sense described in Footnote 1, a reduced system is an open system whose dynamics includes the backaction of its coarse-grained environment. This can be viewed as an idealization of finite-size bath in the limit that the bath degrees of freedom is sufficiently large and the size of the bath is much larger than the scales associated with the oscillator's motion. We assume that the coupling strength e i is not so strong as to displace the oscillators. For a discussion of the physical consequences of factorizable initial conditions and generalizations, see e.g.,[63,65,83,84]. Start with two noninteracting oscillators, each interacts with its own thermal bath. The system is described by two decoupled yet almost identical Langevin equations, the only difference is in the noises of the two baths at different temperatures. Now turn on the interaction between the two oscillators, then each Langevin equation should acquire an extra force term associated with the other oscillator's variable. Stationary non-equilibrium states of infinite harmonic systems. E G See, H Spohn, J L Lebowitz, Commun. Math. Phys. 5497See, e.g., H. Spohn and J. L. Lebowitz, "Stationary non-equilibrium states of infinite harmonic systems", Commun. Math. Phys. 54, 97 (1977); Dynamical ensembles in stationary states. G Gallavotti, E D G Cohen, J. Stat. Phys. 80931G. Gallavotti and E. D. G. Cohen , "Dynamical ensembles in stationary states", J. Stat. Phys. 80, 931 (1995); Exponential convergence to non-equilibrium stationary states in classical statistical mechanics. L Rey-Bellet, L E Thomas, Commun. Math. Phys. 225305L. Rey-Bellet and L. E. Thomas, "Exponential convergence to non-equilibrium stationary states in classi- cal statistical mechanics", Commun. Math. Phys. 225, 305 (2002); . J , J.- Non-equilibrium steady states: fluctuations and large deviations of the density and of the current. P Eckmann, ; B Derrida, arXiv:math-ph/0304043Beijing Lecture. 07023J. Stat. Mech.. and references thereinP. Eckmann, "Non-equilibrium steady states" Beijing Lecture (2002), arXiv:math-ph/0304043; B. Derrida, "Non-equilibrium steady states: fluctuations and large deviations of the density and of the current", J. Stat. Mech. P07023 (2007) and references therein. E G See, S R De Groot, P Mazur, Non-equilibrium thermodynamics. North-HollandDoverSee, e.g., S. R. de Groot and P. Mazur, Non-equilibrium thermodynam- ics (North-Holland, 1962; Dover, 1984); . J Vollmer, Phys. Rep. 372and references thereinJ. Vollmer, Phys. Rep. 372, 131 (2002) and references therein; Entropy production and thermodynamics of nonequilibrium stationary states: a point of view. G Gallavotti, Chaos. 14680G. Gallavotti, "Entropy production and thermodynamics of nonequilibrium stationary states: a point of view", Chaos 14, 680 (2004); Steady state thermodynamics. S Sasa, H Tasaki, J. Stat. Phys. 125and references thereinS. Sasa and H. Tasaki, "Steady state thermody- namics", J. Stat. Phys. 125, 125 (2006) and references therein. R Zwanzig, Nonequilibrium Statistical Mechanics. OxfordOxford University PressR. 
C. W. Gardiner and P. Zoller, Quantum Noise, 2nd Enlarged Edition (Springer, Berlin, 2000).
U. Weiss, Quantum Dissipative Systems (World Scientific, Singapore, 1993).
E. Joos, H. D. Zeh, C. Kiefer, D. J. W. Guilini, J. Kupsch and I.-I. Stamatescu, Decoherence and the Appearance of a Classical World in Quantum Theory (Springer, Berlin, 2003).
See, e.g., H. Spohn and J. L. Lebowitz, "Irreversible thermodynamics for quantum systems weakly coupled to thermal reservoirs", Adv. Chem. Phys. 38, 109 (1978); A. E. Allahverdyan and Th. M. Nieuwenhuizen, "Extraction of work from a single thermal bath in the quantum regime", Phys. Rev. Lett. 85, 1799 (2000); Th. M. Nieuwenhuizen and A. E. Allahverdyan, "Statistical thermodynamics of quantum Brownian motion: construction of perpetuum mobile of the second kind", Phys. Rev. E 66, 036102 (2002).
J. Gemmer, M. Michel, and G. Mahler, Quantum Thermodynamics: Emergence of Thermodynamic Behavior within Composite Quantum Systems, Second Edition, Lecture Notes in Physics, Vol. 784 (Springer, Berlin, 2009).
Z. Rieder, J. L. Lebowitz, and E. Lieb, "Properties of a harmonic crystal in a stationary non-equilibrium state", J. Math. Phys. 8, 1073 (1967); A. Casher and J. L. Lebowitz, "Heat flow in regular and disordered harmonic chains", J. Math. Phys. 12, 1701 (1971); A. J. O'Connor and J. L. Lebowitz, "Heat conduction and sound transmission in isotopically disordered harmonic crystals", J. Math. Phys. 15, 692 (1974); H. Spohn and J. L. Lebowitz, "Stationary non-equilibrium states of infinite harmonic systems", Commun. Math. Phys. 54, 97 (1977).
J.-P. Eckmann, C.-A. Pillet, and L. Rey-Bellet, "Non-equilibrium statistical mechanics of anharmonic chains coupled to two heat baths at different temperatures", Commun. Math. Phys. 201, 657 (1999); J.-P. Eckmann and M. Hairer, "Non-equilibrium statistical mechanics of strongly anharmonic chains of oscillators", Commun. Math. Phys. 212, 105 (2000); L. Rey-Bellet and L. E. Thomas, "Exponential convergence to non-equilibrium stationary states in classical statistical mechanics", Commun. Math. Phys. 225, 305 (2002).
S. Popescu, A. J. Short, and A. Winter, "Entanglement and the foundations of statistical mechanics", Nat. Phys. 2, 754 (2006); S. Goldstein, J. L. Lebowitz, R. Tumulka, and N. Zanghi, "Canonical typicality", Phys. Rev. Lett. 96, 050403 (2006); N. Linden, S. Popescu, A. J. Short, and A. Winter, "Quantum mechanical evolution towards thermal equilibrium", Phys. Rev. E 79, 061103 (2009); P. Reimann, "Canonical thermalization", New J. Phys. 12, 055027 (2010); A. J. Short, "Equilibration of quantum systems and subsystems", New J. Phys. 13, 053009 (2011).
H. P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, Oxford, 2002).
Y. Subasi, C. Fleming, J. Taylor and B. L. Hu, "Equilibrium states of open quantum systems in the strong coupling regime", Phys. Rev. E 86, 061132 (2012).
See, e.g., B. Buca and T. Prosen, "A note on symmetry reductions of the Lindblad equation: transport in constrained open spin chains", New J. Phys. 14, 073007 (2012); E. Ilievski and T. Prosen, "Exact steady state manifold of a boundary driven spin-1 Lai-Sutherland chain", Nucl. Phys. B 882, 485 (2014).
D. E. Evans, "Irreducible quantum dynamical semigroups", Comm. Math. Phys. 54, 293 (1977); H. Spohn, "An algebraic condition for the approach to equilibrium of an open N-level system", Lett. Math. Phys. 2, 33 (1977); C. Aron, G. Biroli and L. F. Cugliandolo, "Symmetries of generating functionals of Langevin processes with colored multiplicative noise", J. Stat. Mech. P11018 (2010); D. Manzano and P. I. Hurtado, "Symmetry and the thermodynamics of currents in open quantum systems", arXiv:1310.7370.
S. Lepri, R. Livi, and A. Politi, "Heat conduction in chains of nonlinear oscillators", Phys. Rev. Lett. 78, 1869 (1997); "On the anomalous thermal conductivity of one-dimensional lattices", Europhys. Lett. 43, 271 (1998).
F. Bonetto, J. L. Lebowitz, and L. Rey-Bellet, "Fourier law: a challenge to theorists", in Mathematical Physics 2000 (Imp. Coll. Press, London, 2000), pp. 128-150, arXiv:math-ph/0002052.
E. Fermi, J. Pasta, and S. Ulam, "Studies of nonlinear problems", Los Alamos report LA-1940 (1955).
B. Hu, B. Li and H. Zhao, "Heat conduction in one-dimensional chains", Phys. Rev. E 57, 2992 (1998); "Heat conduction in one-dimensional nonintegrable systems", Phys. Rev. E 61, 3828 (2000); B. Li, H. Zhao, and B. Hu, "Can disorder induce a finite thermal conductivity in 1D lattices?", Phys. Rev. Lett. 86, 63 (2001); 87, 069402 (2001); B. Li, L. Wang and B. Hu, "Finite thermal conductivity in 1D models having zero Lyapunov exponents", Phys. Rev. Lett. 88, 223901 (2002).
S. Lepri, R. Livi, and A. Politi, "Thermal conductivity in classical low-dimensional lattices", Phys. Rep. 377, 1 (2003).
A. Dhar, "Heat transport in low-dimensional systems", Adv. Phys. 57, 457 (2008).
N. Li, J. Ren, L. Wang, G. Zhang, P. Hanggi and B. Li, "Phononics: manipulating heat flow with electronic analogs and beyond", Rev. Mod. Phys. 84, 1045 (2012).
S. Liu, P. Hanggi, N. Li, J. Ren and B. Li, "Anomalous heat diffusion", Phys. Rev. Lett. 112, 040601 (2014).
A. Dhar and K. Wagh, "Equilibration problem for the generalized Langevin equation", EuroPhys. Lett. 79, 60003 (2007).
J.-P. Eckmann and E. Zabey, "Strange heat flux in (an)harmonic networks", arXiv:nlin/0305006.
G. Gallavotti and E. G. D. Cohen, "Dynamical ensembles in nonequilibrium statistical mechanics", Phys. Rev. Lett. 74, 2694 (1995).
D. J. Evans, E. G. D. Cohen, and G. P. Morriss, "Probability of second law violations in shearing steady states", Phys. Rev. Lett. 71, 2401 (1993); D. J. Evans and D. J. Searles, "Equilibrium microstates which generate second law violating steady states", Phys. Rev. E 50, 1645 (1994); D. J. Evans and D. J. Searles, "The fluctuation theorem", Adv. Phys. 51, 1529 (2002).
J. Kurchan, "Fluctuation theorem for stochastic dynamics", J. Phys. A 31, 3719 (1998).
J. L. Lebowitz and H. Spohn, "A Gallavotti-Cohen-type symmetry in the large deviation functional for stochastic dynamics", J. Stat. Phys. 95, 333 (1999).
U. Seifert, "Stochastic thermodynamics, fluctuation theorems and molecular machines", Rep. Prog. Phys. 75, 126001 (2012).
C. Jarzynski, "Nonequilibrium equality for free energy differences", Phys. Rev. Lett. 78, 2690 (1997); C. Jarzynski, "Equilibrium free-energy differences from nonequilibrium measurements: a master-equation approach", Phys. Rev. E 56, 5018 (1997).
G. E. Crooks, "Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences", Phys. Rev. E 60, 2721 (1999).
M. Esposito, U. Harbola and S. Mukamel, "Nonequilibrium fluctuations, fluctuation theorems, and counting statistics in quantum systems", Rev. Mod. Phys. 81, 1665 (2009); M. Campisi, P. Hänggi, and P. Talkner, "Quantum fluctuation relations: foundations and applications", Rev. Mod. Phys. 83, 771 (2011).
C. Jarzynski, "Equalities and inequalities: irreversibility and the second law of thermodynamics at the nanoscale", Ann. Rev. Cond. Mat. Phys. 2, 329 (2011).
Yigit Subasi and B. L. Hu, "Quantum and classical fluctuation theorems from a decoherent-histories open-system analysis", Phys. Rev. E 85, 011112 (2012).
L. Amico, R. Fazio, A. Osterloh and V. Vedral, "Entanglement in many-body systems", Rev. Mod. Phys. 80, 517 (2008).
M. Wieśniak, V. Vedral and Caslav Brukner, "Heat capacity as an indicator of entanglement", Phys. Rev. B 78, 064108 (2008).
K. Audenaert, J. Eisert, M. B. Plenio, and R. F. Werner, "Entanglement properties of the harmonic chain", Phys. Rev. A 66, 042327 (2002).
J. Anders, "Thermal state entanglement in harmonic lattices", Phys. Rev. A 77, 062102 (2008).
J. Anders and A. Winter, "Entanglement and separability of quantum harmonic oscillator systems at finite temperature", Quantum Inf. Comput. 8, 0245 (2008).
A. Ghesquiere and T. Dorlas, "Entanglement of a two-particle Gaussian state interacting with a heat bath", Phys. Lett. A 377, 2831 (2013).
V. Vedral, "Quantum physics: hot entanglement", Nature 468, 769 (2010).
A. Ghesquiere, I. Sinayskiy and F. Petruccione, "Dynamics and non-equilibrium steady state in a system of coupled harmonic oscillators", Phys. Lett. A 377, 1682 (2013).
F. Galve, L. A. Pachón and D. Zueco, "Bringing entanglement to the high temperature limit", Phys. Rev. Lett. 105, 180501 (2010).
M. Znidaric, "Entanglement in stationary nonequilibrium states at high energies", Phys. Rev. A 85, 012324 (2012).
A. Dhar, K. Saito and P. Hanggi, "Nonequilibrium density-matrix description of steady-state quantum transport", Phys. Rev. E 85, 011126 (2012).
Y.-C. Chen, J. L. Lebowitz and C. Liverani, "Dissipative quantum dynamics in a boson bath", Phys. Rev. B 40, 4664 (1989).
M. Zoli, "Path integral of the Holstein model with a φ4 on-site potential", Phys. Rev. B 72, 214302 (2005).
C. Aron, G. Biroli and L. F. Cugliandolo, "Symmetries of generating functionals of Langevin processes with colored multiplicative noise", J. Stat. Mech. P11018 (2010).
J. Schwinger, "Brownian motion of a quantum oscillator", J. Math. Phys. 2, 407 (1961); L. V. Keldysh, "Diagram technique for nonequilibrium processes", Zh. Eksp. Teor. Fiz. 47, 1515 (1964) (Sov. Phys. JETP 20, 1018 (1965)); K. C. Chou, Z. B. Su, B. L. Hao and L. Yu, "Equilibrium and nonequilibrium formalisms made unified", Phys. Rep. 118, 1 (1985).
E. Calzetta and B. L. Hu, "Nonequilibrium quantum fields: closed-time-path effective action, Wigner function, and Boltzmann equation", Phys. Rev. D 37, 2878 (1988).
J. Berges, "Introduction to nonequilibrium quantum field theory", AIP Conf. Proc. 739, 3 (2005), arXiv:hep-ph/0409233.
G. Aarts and J. M. M. Resco, "Transport coefficients from the 2PI effective action: weak coupling and large N analysis", J. Phys. Conf. Ser. 35, 414 (2006).
C. Bodet, M. Kronenwett, B. Nowak, D. Sexty and T. Gasenzer, "Non-equilibrium quantum many-body dynamics: functional integral approaches", arXiv:1101.0397.
A. M. Rey, B. L. Hu, E. Calzetta, A. Roura and C. Clark, "Nonequilibrium dynamics of optical-lattice-loaded Bose-Einstein-condensate atoms: beyond the Hartree-Fock-Bogoliubov approximation", Phys. Rev. A 69, 033610 (2004).
W.-M. Zhang, P.-Y. Lo, H.-N. Xiong, M. W.-Y. Tu and F. Nori, "General non-Markovian dynamics of open quantum systems", Phys. Rev. Lett. 109, 170402 (2012).
D. Bödeker, "From hard thermal loops to Langevin dynamics", Nucl. Phys. B 559, 502 (1999), and developments of relativistic heavy ion physics in the 2000s.
B. L. Hu and E. Verdaguer, "Stochastic gravity: theory and applications", Living Reviews in Relativity 11, 3 (2008), arXiv:0802.0658, and references therein.
E. Calzetta and B. L. Hu, Nonequilibrium Quantum Field Theory (Cambridge University Press, Cambridge, 2008).
R. P. Feynman and F. L. Vernon, "The theory of a general quantum system interacting with a linear dissipative system", Ann. Phys. (N.Y.) 24, 118 (1963).
A. O. Caldeira and A. J. Leggett, "Path integral approach to quantum Brownian motion", Physica A 121, 587 (1983).
H. Grabert, P. Schramm and G. L. Ingold, "Quantum Brownian motion, the functional integral approach", Phys. Rep. 168, 115 (1988).
B. L. Hu, J. P. Paz and Y. Zhang, "Quantum Brownian motion in a general environment: exact master equation with nonlocal dissipation and colored noise", Phys. Rev. D 45, 2843 (1992).
B. L. Hu, Lectures at the Seventh International Latin-American Symposium on General Relativity (SILARG VII); proceedings appeared as Relativity and Gravitation: Classical and Quantum, edited by J. D'Olivio et al. (World Scientific, Singapore, 1991); Yuhong Zhang, Ph.D. thesis (University of Maryland, 1990); E. Calzetta, B. L. Hu, and F. D. Mazzitelli, "Coarse-grained effective action and renormalization group theory in semiclassical gravity and cosmology", Phys. Rep. 352, 459 (2001).
P. R. Johnson and B. L. Hu, "Stochastic theory of relativistic particles moving in a quantum field: scalar Abraham-Lorentz-Dirac-Langevin equation, radiation reaction, and vacuum fluctuations", Phys. Rev. D 65, 065015 (2002).
C. R. Galley and B. L. Hu, "Self-force with a stochastic component from radiation reaction of a scalar charge moving in curved spacetime", Phys. Rev. D 72, 084023 (2005).
E. Calzetta, A. Roura and E. Verdaguer, "Stochastic description for open quantum systems", Physica A 319, 188 (2003).
W. G. Unruh and W. H. Zurek, "Reduction of a wave packet in quantum Brownian motion", Phys. Rev. D 40, 1071 (1989).
B. L. Hu, "Quantum statistical field theory in gravitation and cosmology", Invited Lectures at the Third International Workshop on Thermal Fields and Applications, Banff, Canada, Aug. 1993; proceedings edited by R. Kobe and G. Kunstatter (World Scientific, 1994), arXiv:gr-qc/9403061.
B. L. Hu, J. P. Paz and Y. Zhang, "Quantum Brownian motion in a general environment. II. Nonlinear coupling and perturbative approach", Phys. Rev. D 47, 1576 (1993).
A. Raval, B. L. Hu and J. Anglin, "Stochastic theory of accelerated detectors in quantum fields", Phys. Rev. D 53, 7003 (1996).
J. Yhingna, J.-S. Wang and P. Hanggi, "Reduced density matrix for nonequilibrium steady states: a modified Redfield solution approach", Phys. Rev. E 88, 052127 (2013).
J.-S. Wang, B. K. Agarwalla, H. Li, and J. Thingna, "Nonequilibrium Green's function method for quantum thermal transport", Frontiers of Physics, May 2013, arXiv:1303.7317; H. Li, B. K. Agarwalla, B. Li, and J.-S. Wang, "Cumulants of heat transfer in nonlinear quantum systems", Eur. Phys. J. B 86, 500 (2013).
S. G. Das and A. Dhar, "Landauer formula for phonon heat conduction: relation between energy transmittance and transmission coefficient", Eur. Phys. J. B 85, 372 (2012).
H. Ness, "Nonequilibrium distribution functions for quantum transport: universality and approximation for the steady state regime", Phys. Rev. B 89, 045409 (2014).
J. J. Halliwell and T. Yu, "Alternative derivation of the Hu-Paz-Zhang master equation of quantum Brownian motion", Phys. Rev. D 53, 2012 (1996).
J.-T. Hsiang and B. L. Hu, "Nonequilibrium energy transport in nonlinear open quantum systems: a functional perturbative analysis" (Paper II, in preparation).
J.-T. Hsiang and B. L. Hu, "Nonequilibrium steady state: quantum entanglement at high temperatures?" (in preparation).
Y. Subasi, C. H. Fleming, J.-T. Hsiang and B. L. Hu, "Equilibration in a weakly nonlinear quantum open system" (in preparation for Phys. Rev. E).
G. Adesso, "Entanglement of Gaussian states", Ph.D. thesis (University of Salerno, 2006), arXiv:quant-ph/0702069.
S. Y. Lin and B. L. Hu, "Temporal and spatial dependence of quantum entanglement from field theory perspective", Phys. Rev. D 79, 085020 (2009).
B. L. Hu, S. Y. Lin and J. Louko, "Entanglement between oscillators in relativistic motion and a quantum field", Class. Quant. Grav. 29, 224005 (2012).
L. D. Romero and J. P. Paz, "Decoherence and initial correlations in quantum Brownian motion", Phys. Rev. A 55, 4070 (1997).
C. H. Fleming, A. Roura and B. L. Hu, "Initial state preparation with dynamically generated system-environment correlations", Phys. Rev. E 84, 021106 (2011).
J.-T. Hsiang, R. Zhou and B. L. Hu, "Entanglement structure of an open system of N quantum oscillators: II. strong disparate couplings N = 3", arXiv:1306.3728.
[]
[ "Fast Stochastic Algorithms for Low-rank and Nonsmooth Matrix Problems", "Fast Stochastic Algorithms for Low-rank and Nonsmooth Matrix Problems" ]
[ "Dan Garber \nInstitute of Technology\nTechnion -Israel Institute of Technology\nIsrael\n", "Atara Kaplan \nInstitute of Technology\nTechnion -Israel Institute of Technology\nIsrael\n" ]
[ "Institute of Technology\nTechnion -Israel Institute of Technology\nIsrael", "Institute of Technology\nTechnion -Israel Institute of Technology\nIsrael" ]
[]
Composite convex optimization problems which include both a nonsmooth term and a low-rank promoting term have important applications in machine learning and signal processing, such as when one wishes to recover an unknown matrix that is simultaneously low-rank and sparse. However, such problems are highly challenging to solve in large-scale: the low-rank promoting term prohibits efficient implementations of proximal methods for composite optimization and even simple subgradient methods. On the other hand, methods which are tailored for low-rank optimization, such as conditional gradient-type methods, which are often applied to a smooth approximation of the nonsmooth objective, are slow since their runtime scales with both the large Lipshitz parameter of the smoothed gradient vector and with 1/ǫ.In this paper we develop efficient algorithms for stochastic optimization of a strongly-convex objective which includes both a nonsmooth term and a low-rank promoting term. In particular, to the best of our knowledge, we present the first algorithm that enjoys all following critical properties for large-scale problems: i) (nearly) optimal sample complexity, ii) each iteration requires only a single low-rank SVD computation, and iii) overall number of thin-SVD computations scales only with log 1/ǫ (as opposed to poly(1/ǫ) in previous methods). We also give an algorithm for the closely-related finite-sum setting. At the heart of our results lie a novel combination of a variance-reduction technique and the use of a weak-proximal oracle which is key to obtaining all above three properties simultaneously. We empirically demonstrate our results on the problem of recovering a simultaneously low-rank and sparse matrix. Finally, while our main motivation comes from low-rank matrix optimization problems, our results apply in a much wider setting, namely when a weak proximal oracle can be implemented much more efficiently than the standard exact proximal oracle.
null
[ "https://arxiv.org/pdf/1809.10477v1.pdf" ]
52,876,605
1809.10477
18cb06de51cb4bfcc2e3e5045319f5928fc62c44
Fast Stochastic Algorithms for Low-rank and Nonsmooth Matrix Problems 27 Sep 2018 Dan Garber Institute of Technology Technion -Israel Institute of Technology Israel Atara Kaplan Institute of Technology Technion -Israel Institute of Technology Israel Fast Stochastic Algorithms for Low-rank and Nonsmooth Matrix Problems 27 Sep 2018 Composite convex optimization problems which include both a nonsmooth term and a low-rank promoting term have important applications in machine learning and signal processing, such as when one wishes to recover an unknown matrix that is simultaneously low-rank and sparse. However, such problems are highly challenging to solve in large-scale: the low-rank promoting term prohibits efficient implementations of proximal methods for composite optimization and even simple subgradient methods. On the other hand, methods which are tailored for low-rank optimization, such as conditional gradient-type methods, which are often applied to a smooth approximation of the nonsmooth objective, are slow since their runtime scales with both the large Lipshitz parameter of the smoothed gradient vector and with 1/ǫ.In this paper we develop efficient algorithms for stochastic optimization of a strongly-convex objective which includes both a nonsmooth term and a low-rank promoting term. In particular, to the best of our knowledge, we present the first algorithm that enjoys all following critical properties for large-scale problems: i) (nearly) optimal sample complexity, ii) each iteration requires only a single low-rank SVD computation, and iii) overall number of thin-SVD computations scales only with log 1/ǫ (as opposed to poly(1/ǫ) in previous methods). We also give an algorithm for the closely-related finite-sum setting. At the heart of our results lie a novel combination of a variance-reduction technique and the use of a weak-proximal oracle which is key to obtaining all above three properties simultaneously. We empirically demonstrate our results on the problem of recovering a simultaneously low-rank and sparse matrix. Finally, while our main motivation comes from low-rank matrix optimization problems, our results apply in a much wider setting, namely when a weak proximal oracle can be implemented much more efficiently than the standard exact proximal oracle. Introduction Our paper is strongly motivated by low-rank and non-smooth matrix optimization problems which are quite common in machine learning and signal processing applications. These include tasks such as low-rank and sparse covariance matrix estimation, graph denoising and link prediction [17], analysis of social networks [19], and subspace clustering [18], to name a few. Such optimization problems often fit the following very general optimization model: min X∈V f (X) := G(X) + R NS (X) + h(X),(1) where V is a finite linear space over the reals, G(·) is convex and smooth, R NS (·) is convex and (generally) nonsmooth, and h(·) is convex and proximal-friendly (e.g., it is an indicator function for a convex set or a convex regularizer). Motivated by large-scale machine learning settings, we further assume G(·) is stochastic, i.e., G(X) = E g∼D [g(X)], where D is a distribution over convex and smooth functions, and either given by a sampling oracle (stochastic setting), or admits a finite support and given explicitly (finite-sum setting). Finally, we assume f (·) is strongly-convex (either due to strong convexity of G(·) or R NS (·)). 
For instance, the simultaneously low-rank and sparse covariance estimation problem [17] can be written as:

min_{tr(X) ≤ τ, X ⪰ 0}  (1/2)‖X − M‖²_F + λ‖X‖₁,    (2)

where M = YY⊤ + N is a noisy observation of some low-rank and sparse covariance matrix YY⊤. Here, V = S^n (the space of n×n real symmetric matrices), G(X) = (1/2)‖X − M‖²_F (which is deterministic in this simple example), R_NS(X) = λ‖X‖₁, and h(X) is an indicator function for the trace-bounded positive semidefinite cone (which both constrains the solution to be positive semidefinite and promotes low rank). A closely related problem to (2), to which all of the following discussions apply, is when X ∈ V = R^{m×n} is not constrained to be positive semidefinite (or even symmetric), and a low-rank solution is encouraged by constraining X via a nuclear norm constraint ‖X‖_* ≤ τ, where ‖·‖_* is the ℓ₁ norm applied to the vector of singular values.

In Table 1 we provide a very simple numerical demonstration of the applicability of Problem (2) to low-rank and sparse estimation, which exhibits the importance of combining both low-rank and entry-wise sparsity promoting terms (as opposed to methods that only promote low-rank).

Table 1: Numerical example (showing the signal-recovery error, relative sparsity and rank of the solution X*) for estimating a sparse rank-one matrix YY⊤ = yy⊤ from the noisy observation M = yy⊤ + (c/2)(N + N⊤), where N ∼ N(0, I_n). Each entry y_i is zero w.p. 0.9 and U{1,...,10} w.p. 0.1. The dimension is n = 50. Results are averages of 50 i.i.d. experiments, and c ∈ {0.5, 5} (magnitude of noise). For method (2), λ is chosen via experimentation.

The general model (1) is known to be a very difficult optimization problem to solve in large scale, already in the specific setting of Problem (2). In particular, many of the traditional first-order convex optimization methods used for solving non-smooth optimization problems are not efficiently applicable to it. For instance, proximal methods for composite optimization, such as the celebrated FISTA algorithm [1], which in many cases are very efficient, do not admit efficient implementations for composite problems which include both a non-smooth term and a low-rank promoting term. When applied to Problem (2), each iteration of FISTA requires solving a problem of the same form as the original problem, and hence is inefficient. Another type of well-known first-order methods that are applicable to nonsmooth problems are deterministic/stochastic subgradient/mirror-descent methods [15,3]. However, these methods are also inefficient for problems such as (2), since each iteration requires projecting a point onto the feasible set, which for problems such as (2) requires a full-rank SVD computation on each iteration, which is computationally prohibitive for large-scale problems. Another type of methods, which are often suitable for large-scale low-rank matrix optimization problems, and which have been studied extensively in this context in recent years, are Conditional Gradient-type methods (aka Frank-Wolfe-type methods), see for instance [8,5,7,6,16,13,10,9,14]. These types of algorithms, when applied to optimization over a nuclear-norm ball or over the trace-bounded positive semidefinite cone (as in Problem (2)), avoid expensive full-rank SVD computations, and only compute a single leading singular vector pair on each iteration (i.e., a rank-one SVD), and hence are much more scalable.
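For concreteness, the synthetic instance behind Table 1 and the objective value of Problem (2) can be set up as in the following minimal NumPy sketch (all function names and defaults are ours, not the authors'); the trace/positive-semidefinite constraint of (2) is not enforced here, since handling it is exactly the role of the projection/proximal steps discussed in the sequel.

```python
import numpy as np

def generate_observation(n=50, p_nonzero=0.1, c=0.5, seed=0):
    """Noisy observation M = y y^T + (c/2)(N + N^T), as described in the Table 1 caption."""
    rng = np.random.default_rng(seed)
    # each entry of y is 0 w.p. 0.9 and uniform on {1,...,10} w.p. 0.1
    mask = rng.random(n) < p_nonzero
    y = (mask * rng.integers(1, 11, size=n)).astype(float)
    N = rng.standard_normal((n, n))
    M = np.outer(y, y) + 0.5 * c * (N + N.T)
    return y, M

def objective(X, M, lam):
    """Objective of Problem (2): 0.5*||X - M||_F^2 + lam*||X||_1 (entrywise l1 norm)."""
    return 0.5 * np.linalg.norm(X - M, "fro") ** 2 + lam * np.abs(X).sum()

y, M = generate_observation(c=0.5)
print(objective(np.zeros_like(M), M, lam=0.1))  # value at the trivial candidate X = 0
```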
However, Conditional Gradient methods can usually be applied only to smooth problems, and so, the non-smooth term R NS (X) is often replaced with a smooth approximation R(X). A general theory and framework for generating such smooth approximation (i.e., replacing the non-smooth term with a smooth function that is point-wise close to the original), often referred to as smoothing, is described in [2]. Unfortunately, smoothing a function often results in a large Lipschitz constant of the gradient vector of the smoothed function. For example, the smooth approximation of the ℓ 1 norm is via the well known Huber function for which the Lipschitz constant of the gradient often scales like dim(V)/ε, where ε is target accuracy to which the problem needs to be solved. Since the convergence rate of smooth optimization algorithms such as conditional gradient-type methods discussed above often scales with βD 2 /ǫ, where β is the Lipschitz parameter of the gradient and D is the distance of the initial point to an optimal solution, these methods are often not scalable for nonsmooth objectives such as Problem (2) and the general model (1) (even after smoothing them), since typically all three parameters 1/ε, D, β can be quite large. In particular, we note that for strongly-convex functions, it is possible to obtain (via other types of first-order methods) rates that depend only logarithmically on 1/ǫ, D. Another issue with conditional gradient methods is that, as opposed to projected subgradient methods, their analysis does not naturally extend to handle stochastic objectives (recall that, motivated by machine learning settings, in the general model (1) we assume G(·) is stochastic). In particular, a straightforward variant of the method for stochastic objectives results in a highly suboptimal sample complexity [6]. In a recent related work [12], the authors consider a variant of the conditional gradient method for solving stochastic optimization problems that cleverly combines the conditional gradient method with Nesterov's accelerated method and stochastic sampling to obtain an algorithm for smooth stochastic convex optimization that, in the context of low-rank matrix optimization problems, i) requires only 1-SVD computation on each iteration (as in the standard conditional gradient method) and ii) enjoys (nearly) optimal sample complexity (both in the strongly convex case and non-strongly convex case). In a recent work [6], the technique of [12] was extended to the finite-sum stochastic setting and combined with a popular variance reduction technique [11], resulting in a conditional gradient-type method for smooth and strongly-convex finite-sum optimization that i) requires only 1-SVD computation on each iteration, and ii) enjoys a gradient-oracle complexity of the same flavor as usually obtained via variance-reduction methods [11], greatly improving over naive applications of conditional gradient methods which do not apply variance reduction. Unfortunately, both results [12,6], while greatly improving the first-order oracle complexity of previous conditional-gradient methods, still require an overall number of 1-SVD computations that scales like βD/ǫ. Hence, when applied to smooth approximations of nonsmooth problems such as Problems (2), (1), the overall very large number of thin-SVD computations needed greatly limits the applicability of these methods. 
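To make the smoothing step concrete, the following is our own illustrative sketch (not code from the paper) of the entrywise Huber approximation of λ‖X‖₁ with smoothing parameter µ; its gradient is (λ/µ)-Lipschitz, so driving the approximation error below the target accuracy forces a large smoothness constant, as discussed above.

```python
import numpy as np

def huber_l1(X, lam=1.0, mu=1e-2):
    """Entrywise Huber approximation of lam*||X||_1 with smoothing parameter mu.

    Each entry x contributes x**2/(2*mu) if |x| <= mu and |x| - mu/2 otherwise,
    so the per-entry approximation error is at most mu/2 while the gradient of
    the sum is (lam/mu)-Lipschitz.
    """
    A = np.abs(X)
    vals = np.where(A <= mu, X ** 2 / (2.0 * mu), A - 0.5 * mu)
    return lam * vals.sum()

def huber_l1_grad(X, lam=1.0, mu=1e-2):
    """Gradient of the Huber-smoothed l1 term: lam * clip(x/mu, -1, 1), entrywise."""
    return lam * np.clip(X / mu, -1.0, 1.0)
```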
The limitations of previous methods in tackling large-scale low-rank and nonsmooth matrix optimization problems naturally leads us to the following question. In the context of low-rank and nonsmooth matrix optimization, is it possible to combine all following three key properties for solving large-scale instances of Model (1) into a single algorithm? 1. (nearly) optimal sample complexity, 2. use of only low-rank SVD computations, 3. overall number of low-rank SVD computations scales with log(1/ε) (not poly(1/ǫ) as in previous methods). In this paper we answer this question in the affirmative. To better discuss our results we now fully formalize the considered model and assumptions. We consider the following general model: min X∈V f (X) := G(X) + R(X) + h(X),(3) where V is a finite linear space over the reals equipped with an inner product ·, · . Throughout the paper we let · denote the norm induced by the inner product. Throughout the paper we consider the following assumptions for model (3). Assumption 1. • G is stochastic, i.e., G(X) = E g∼D [g(X)], where D is a distribution over functions g : V → R, given by a sampling oracle. G is differentiable, and for all g ∈ supp(D), g is β G -smooth, and there exists σ ≥ 0 such that σ ≥ sup X∈dom(h) E[ ∇G(X) − ∇g(X) 2 ]. • R : V → (−∞, ∞] is deterministic, β R -smooth, and convex, • G + R is α-strongly convex, • h : V → (−∞, ∞] is deterministic, non-smooth, proper, lower semicontinuous and convex. For simplicity we define β := β G + β R . As discussed above, R(·) can be thought of as a smooth approximation of some nonsmooth term R NS (·) (hence, we generally expect that β R >> β G ), and h(·) can be thought of as either an indicator function for a convex set (e.g., a nuclear-norm ball) or a convex regularizer. A quick summary of our results and comparison to previous conditional gradient-type methods for solving Model (3) in case h(·) is either an indicator for a nuclear norm ball of radius τ or the set of all positive semidefinite matrices with trace at most τ , is given in Table 2. Our algorithm and novel complexity bounds are based on a combination of the variance reduction technique introduced in [11] and the use of, what we refer to in this work as, a weak-proximal oracle (as opposed to the standard exact proximal oracle used ubiquitously in first-order methods), which was introduced in the context of nuclear-norm-constrained optimization in [7], and further generalized in [16]. In the context of low-rank matrix optimization problems, implementation of this weak-proximal oracle requires a SVD computation of rank at most rank(X * ) -the rank of the optimal solution X * , as opposed to an exact proximal oracle that requires in general a full-rank SVD computation. Since for such problems we expect that rank(X * ) is much smaller than the dimension, and since the runtime of low-rank SVD computations (when carried out via fast iterative methods such as variants of the subspace iteration method or Lanczos-type algorithms) scales nicely with both the target rank and sparsity of gradients 1 , for such problems the weakproximal oracle admits a much more efficient implementation than the standard proximal oracle. While both of these algorithmic ingredients are previously known and studied, it is their particular combination that, quite surprisingly, proves to be key to obtaining all three complexity bounds listed in our proposed question, simultaneously. 
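As a concrete reading of Assumption 1, the data defining an instance of Model (3) can be packaged, for example, as follows (an illustrative sketch; all field names are our own choice): a sampler for stochastic gradients of G, a gradient of the smooth term R, the radius τ defining h, and the constants appearing in the assumption.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class ProblemSpec:
    """Ingredients of Model (3) under Assumption 1 (illustrative container)."""
    sample_grad_G: Callable[[np.ndarray], np.ndarray]  # gradient of a freshly sampled g ~ D at X
    grad_R: Callable[[np.ndarray], np.ndarray]          # gradient of the smooth term R
    tau: float      # radius of the nuclear-norm ball (or trace bound) defining h
    beta_G: float   # smoothness parameter of every g in supp(D)
    beta_R: float   # smoothness parameter of R
    alpha: float    # strong-convexity parameter of G + R
    sigma: float    # bound on sqrt(E ||grad G(X) - grad g(X)||^2)

    @property
    def beta(self) -> float:
        # beta := beta_G + beta_R, as defined after Assumption 1
        return self.beta_G + self.beta_R
```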
In particular, it is important to note that while the use of a weak proximal oracle, as we define precisely in the sequel, suffices to obtain an algorithm that uses overall only O(log(1/ǫ)) low-rank SVD computations (currently treating for simplicity all other parameters as constants), to the best of our knowledge it does not suffice in order to also obtain (nearly) optimal sample complexity. The reason, at a high level (see a more detailed discussion in the sequel), is that the weak-proximal oracle is strong enough to guarantee decrease of the loss function on each iteration (in expectation), but does not give a stronger type of guarantee, which holds for the exact proximal oracle, that is crucial for obtaining optimal sample complexity with algorithms such as Stochastic Gradient Descent [3] and the conditional gradient-type method of [12] (that indeed rely on exact, or nearly exact, proximal computations). It turns out that the use of a variance reduction technique (such as [11]) is key to bypassing this obstacle and obtaining also (near) optimal sample complexity, on top of the low SVD complexity. We also give a variant of our algorithm for the finite-sum setting that obtains similar improvements.

Finally, while our main motivation comes from low-rank and nonsmooth matrix optimization problems, it is important to note that, as captured in our general Model (3), our results are applicable in a much wider setting than that of low-rank matrix optimization problems. Our method is suitable especially for stochastic nonsmooth convex problems for which implementing a weak proximal oracle is much more efficient than an exact proximal oracle.

Table 2: Summary of complexity bounds and comparison with previous conditional gradient-type methods for Model (3), when h(·) is an indicator for a nuclear-norm ball of radius τ or for the trace-bounded (by τ) positive semidefinite cone.

Method | # exact gradients | # stochastic gradients | SVD rank | # SVD computations
 |  | σ²βτ⁴/ε³ | 1 | βτ²/ε
CGS [12] | 0 | σ²/(αε) | 1 | βτ²/ε
This work (Alg. 1) | 0 | σ²/(αε) | rank(X*) | (β/α)·ln(1/ε)
↓ Finite Sum ↓
STORC [6] | ln(1/ε) | (β/α)²·ln(1/ε) | 1 | βτ²/ε
This work, finite sum (Alg. 2) | ln(1/ε) | (β_G²β/α³)·ln(1/ε) | rank(X*) | (β/α)·ln(1/ε)

Organization of this paper

The rest of this paper is organized as follows. In Section 2 we present our main algorithm and our two main results (informally). Importantly, we discuss in detail the importance of combining stochastic variance reduction with a weak proximal oracle to obtain our novel complexity bounds. In Section 3 we describe our main results in full detail and prove them. In Section 4 we describe in detail applications of our results to non-smooth optimization problems, including several concrete examples. Finally, in Section 5 we present preliminary empirical evidence supporting our theoretical results.

Algorithm and Results

Our algorithm for solving Model (3), Algorithm 1, is given below. We now briefly discuss the two main building blocks of the algorithm, namely a variance reduction technique and the use of a weak proximal oracle, and the importance of their combination in achieving the novel complexity bounds.

The importance of combining weak proximal updates with variance reduction

Our use of the variance reduction technique of [11] is quite straightforward, as observable in Algorithm 1. Importantly, while [11] applied it to finite-sum optimization, here we apply it to the more general black-box stochastic setting, and hence the sample-size parameter k_s used for the "snap-shot" gradient ∇̃g(X_s) on epoch s grows from epoch to epoch. This modification of the technique is along the lines of [4]. The weak proximal oracle strategy is applied in our algorithm as follows.
For a step-size η t , a composite optimization proximal algorithm, which treats the function h(·) in proximal fashion and the functions G, R via a gradient oracle, will compute on each iteration an update of the form V t ← arg min V∈V ψ t (V) := V − X s,t + 1 2βη t (∇g(X s,t ) + ∇R(X s,t )) 2 + 1 βη t h(V) . (4) For instance, if V = R m×n and h(·) is an indicator function for the nuclear-norm ball {X ∈ R m×n | X * ≤ τ }, then computing V t in Eq. (4) amounts to Euclidean projection of the matrix A t = X s,t − 1 2βηt (∇g(X s,t ) − ∇R(X s,t )) onto the nuclear-norm ball of radius τ . This projection is carried out by computing a full-rank SVD of A t and projecting the singular values onto the τ -scaled simplex. Since a full-rank SVD is required, this operation takes O(m 2 n) time (assuming m ≤ n), which is prohibitive for very large m, n. Our algorithm avoids the computational bottleneck of full-rank SVD computations by only requiring that V t satisfies the inequality: ψ t (V t ) ≤ ψ t (X * ),(5) where X * is the (unique) optimal solution to (3). We call a procedure for computing such updates -a weak proximal oracle. In the context discussed above, i.e., h(·) is an indicator for the radius-τ nuclear-norm ball, (5) can be satisfied simply by projecting the rank(X * )approximation of the matrix A t onto the nuclear-norm ball. This only requires to compute the top rank(X * ) components in the singular value decomposition of A t , and thus the runtime scales roughly like O(rank(X * ) · nnz(A t )) using fast Krylov Subspace methods (e.g., subspace iteration, Lanczos), which results in a much more efficient procedure (see further detailed discussions in [7,16]). Unfortunately, the use of weak proximal updates given by Eq. (5), as opposed to the standard update in Eq. (4), seems to come with a price. While the weak-proximal guarantee ψ t (V t ) ≤ ψ t (X * ) is sufficient to retain the convergence rates attainable via descent-type methods, i.e., methods that decrease the function value on each iteration (see for instance [7,16]), it does not seem strong enough to obtain the rates of nondescent-type methods such as Nesterov's acceleration-based methods [12], and stochastic (sub)gradient methods [3]. The analyses of these methods seem to crucially depend on the stronger inequality X * − V t 2 ≤ 2 α t (ψ t (X * ) − ψ t (V t )) ,(6) where α t is the strong-convexity parameter associated with the function ψ t (·). The inequality (6) is obtained only for an optimal minimizer of ψ t (·) (as given by (4)) and not by the weak proximal solution given by (5). It is for this reason that simply combining the use of the weak proximal update (5) with standard analysis of SGD [3] or the Stochastic Conditional Gradient Sliding method [12] will not result in optimal sample complexity 2 . Perhaps surprisingly, as our analysis shows, it is the combination of the weak proximal updates with the variance reduction technique that allows us to avoid the use of the strong inequality (6) and to obtain (nearly) optimal sample complexity using only the weak proximal update guarantee (5). 
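For h(·) the indicator of the nuclear-norm ball of radius τ, the weak-proximal step just described can be sketched as follows (our own NumPy/SciPy illustration, assuming a target rank r with r ≥ rank(X*) is supplied): compute a rank-r thin SVD of A_t and project its singular values onto the τ-scaled simplex, instead of performing a full-rank SVD.

```python
import numpy as np
from scipy.sparse.linalg import svds

def project_simplex_leq(s, tau):
    """Euclidean projection of a nonnegative vector s onto {v >= 0, sum(v) <= tau}."""
    if s.sum() <= tau:
        return s
    u = np.sort(s)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    rho = np.nonzero(u * idx > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.maximum(s - theta, 0.0)

def weak_prox_nuclear_ball(A, tau, r):
    """Weak-proximal step for h = indicator of {||X||_* <= tau}: project the rank-r
    approximation of A onto the nuclear-norm ball using only a rank-r (thin) SVD.
    With r >= rank(X*), this suffices for the guarantee psi_t(V_t) <= psi_t(X*) of Eq. (5).
    Note: svds requires 1 <= r < min(A.shape)."""
    U, s, Vt = svds(A, k=r)
    s_proj = project_simplex_leq(s, tau)
    return (U * s_proj) @ Vt
```

When r is close to min(m, n), falling back to a dense SVD (e.g., numpy.linalg.svd) is simpler and equally valid; the point of the sketch is only that the per-iteration cost is governed by the target rank rather than the full dimension.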
Since in many settings of interest, especially in the context of matrix optimization problems, the computation of V_t requires some numeric procedure which is prone to accuracy issues, or in cases in which X* is not low-rank but only very close to a low-rank matrix (in some norm), we introduce an error-tolerance parameter δ in the proximal computation step in Algorithm 1, which allows it to absorb such errors, which can be controlled (e.g., by properly tuning the precision of the thin-SVD computation).

Algorithm 1: Stochastic Variance-Reduced Generalized Conditional Gradient for Problem (3)
Input: T, {η_t}_{t=1}^{T−1} ⊂ [0, 1], {k_t}_{t=1}^{T−1}, {k_s}_{s≥1} ⊂ N, δ ≥ 0.
Initialization: Choose some X_1 ∈ dom(h).
for s = 1, 2, ... do
  Sample g^(1), ..., g^(k_s) from D. Define ∇̃g(X_s) = (1/k_s) Σ_{i=1}^{k_s} ∇g^(i)(X_s)  {snap-shot gradient}.
  X_{s,1} = X_s
  for t = 1, 2, ..., T − 1 do
    Sample g^(1), ..., g^(k_t) from D. Define ∇̃g(X_{s,t}) = (1/k_t) Σ_{i=1}^{k_t} [∇g^(i)(X_{s,t}) − (∇g^(i)(X_s) − ∇̃g(X_s))].
    V_t = arg min_{V∈V} ψ_t(V) := ‖V − X_{s,t} + (1/(2βη_t))(∇̃g(X_{s,t}) + ∇R(X_{s,t}))‖² + (1/(βη_t)) h(V)
      {in fact it suffices that ψ_t(V_t) ≤ ψ_t(X*) + δ for some optimal solution X*}.
    X_{s,t+1} = (1 − η_t) X_{s,t} + η_t V_t
  end for
  X_{s+1} = X_{s,T}
end for

Outline of main results

We now present a concise version of our main results, Theorems 1 and 2. In Section 3 we provide the complete analysis, with all the details and proofs of these theorems. Subsequent results and concrete applications to non-smooth problems follow in Section 4.

Theorem 1 (stochastic setting). Assume that Assumption 1 holds. There is an explicit choice for the parameters in Algorithm 1 for which the total number of epochs (iterations of the outer loop) required in order to find an ε-approximated solution in expectation for Problem (3) is bounded by O(ln(1/ε)), the total number of calls to the weak proximal oracle is bounded by O((β/α)·ln(1/ε)), and the total number of stochastic gradients sampled is bounded by O(σ²/(αε) + (β_G²β/α³)·ln(1/ε)).

We note that under Assumption 1, the overall number of calls to a weak proximal oracle to reach an ǫ-approximated solution matches the overall number of calls to an exact proximal oracle used by the proximal gradient method for smooth and strongly convex optimization. Also, under Assumption 1, the leading term in the bound on the overall number of stochastic gradients is optimal (up to constants). We also present results for the related finite-sum problem. Our algorithm for the finite-sum setting is very similar to Algorithm 1 and is given in Section 3.1.

Theorem 2 (finite-sum setting). Assume that Assumption 1 holds and that D is an explicitly given uniform distribution over n functions. There exists an explicit choice for the parameters in Algorithm 2 for which the total number of epochs required in order to find an ε-approximated solution in expectation for Problem (3) is bounded by O(ln(1/ε)), and the total number of exact and stochastic gradient evaluations is bounded by O((n + β_G²β/α³)·ln(1/ε)).

We see that, as is standard in variance-reduced methods for smooth and strongly convex optimization, the overall number of gradients decouples between terms that depend on the smoothness and strong convexity of the objective (e.g., the condition number β/α) and the overall number of functions n.

Analysis

In this section we prove Theorems 1 and 2. The following lemma bounds the expected decrease in function value after a single iteration of the inner loop in Algorithm 1.

Lemma 1 (expected decrease). Assume that Assumption 1 holds.
Fix some epoch s of Algorithm 1, and let {X s,t } T +1 t=1 , {V t } T t=1 be the iterates generated throughout the epoch, and suppose that ψ t (V t ) ≤ ψ t (X)+δ for some fixed feasible solutionX. Then, if 2βη t ≤ α, we have that E[f (X s,t+1 )] ≤ (1 − η t ) E[f (X s,t )] + η t f (X) + σ 2 s,t 2β + βη 2 t δ,(7) where σ s,t = E[ ∇G(X s,t ) −∇g(X s,t ) 2 ]. Proof. Denote φ := G + R to be the smooth part of f . φ is β-smooth and so by the well known Decent Lemma, φ(X s,t+1 ) ≤ φ(X s,t ) + X s,t+1 − X s,t , ∇φ(X s,t ) + β 2 X s,t+1 − X s,t 2 = φ(X s,t ) + X s,t+1 − X s,t ,∇g(X s,t ) + ∇R(X s,t ) + X s,t+1 − X s,t , ∇G(X s,t ) −∇g(X s,t ) + β 2 X s,t+1 − X s,t 2 . Plugging in X s, t+1 = (1 − η t )X s,t + η t V t , we get φ(X s,t+1 ) ≤ φ(X s,t ) + η t V t − X s,t ,∇g(X s,t ) + ∇R(X s,t ) + η t V t − X s,t , ∇G(X s,t ) −∇g(X s,t ) + βη 2 t 2 V t − X s,t 2 .(8) In addition, it holds that 0 ≤ 1 √ βη t (∇G(X s,t ) −∇g(X s,t )) − βη t (V t − X s,t ) 2 = 1 βη t ∇G(X s,t ) −∇g(X s,t ) 2 − 2 V t − X s,t , ∇G(X s,t ) −∇g(X s,t ) + βη t V t − X s,t 2 . Rearranging we get, V t − X s,t , ∇G(X s,t ) −∇g(X s,t ) ≤ 1 2βη t ∇G(X s,t ) −∇g(X s,t ) 2 + βη t 2 V t − X s,t 2 . Plugging this last inequality into (8), we get φ(X s,t+1 ) ≤ φ(X s,t ) + η t V t − X s,t ,∇g(X s,t ) + ∇R(X s,t ) + η t 1 2βη t ∇G(X s,t ) −∇g(X s,t ) 2 + βη t 2 V t − X s,t 2 + βη 2 t 2 V t − X s,t 2 = φ(X s,t ) + η t V t − X s,t ,∇g(X s,t ) + ∇R(X s,t ) + 1 2β ∇G(X s,t ) −∇g(X s,t ) 2 + βη 2 t V t − X s,t 2 . Using the convexity of h we have that h(X s,t+1 ) = h((1 − η t )X s,t + η t V t ) ≤ (1 − η t )h(X s,t ) + η t h(V t ). Combining the last two inequalities and recalling that f = φ + h we get f (X s,t+1 ) ≤ (1 − η t )f (X s,t ) + η t (φ(X s,t ) + h(V t )) + 1 2β ∇G(X s,t ) −∇g(X s,t ) 2 +η t V t − X s,t ,∇g(X s,t ) + ∇R(X s,t ) + βη 2 t V t − X s,t 2 = (1 − η t )f (X s,t ) + η t φ(X s,t ) + 1 2β ∇G(X s,t ) −∇g(X s,t ) 2 +βη 2 t ψ t (V t ) − 1 (2βη t ) 2 ∇ g(X s,t ) + ∇R(X s,t ) 2 . By the definition of V t and the assumption of the lemma we have f (X s,t+1 ) ≤ (1 − η t )f (X s,t ) + η t φ(X s,t ) + 1 2β ∇G(X s,t ) −∇g(X s,t ) 2 +βη 2 t ψ t (X) − 1 (2βη t ) 2 ∇ g(X s,t ) + ∇R(X s,t ) 2 + δ = (1 − η t )f (X s,t ) + η t (φ(X s,t ) + h(X)) + 1 2β ∇G(X s,t ) −∇g(X s,t ) 2 +η t X − X s,t ,∇g(X s,t ) + ∇R(X s,t ) + βη 2 t X − X s,t 2 + βη 2 t δ. Taking expectation with respect to the randomness in∇g(X s,t ), E t [f (X s,t+1 )] ≤ (1 − η t )f (X s,t ) + η t (φ(X s,t ) + h(X)) + σ 2 s,t 2β + η t X − X s,t , ∇G(X s,t ) + ∇R(X s,t ) + βη 2 t X − X s,t 2 + βη 2 t δ. Using the α-strong convexity of φ = G + R we get E t [f (X s,t+1 )] ≤ (1 − η t )f (X s,t ) + η t (φ(X s,t ) + h(X)) + σ 2 s,t 2β + η t φ(X) − φ(X s,t ) − α 2 X − X s,t 2 + βη 2 t X − X s,t 2 + βη 2 t δ = (1 − η t )f (X s,t ) + η t f (X) − αη t 2 X − X s,t 2 + βη 2 t X − X s,t 2 + σ 2 s,t 2β + βη 2 t δ. Using our assuming that 2βη t ≤ α we have that E t [f (X s,t+1 )] ≤ (1 − η t )f (X s,t ) + η t f (X) + σ 2 s,t 2β + βη 2 t δ. Taking expectation over both sides w.r.t all randomness, we get E[f (X s,t+1 )] ≤ (1 − η t ) E[f (X s,t )] + η t f (X) + σ 2 s,t 2β + βη 2 t δ,(9)E[f (X s,t+1 )] − f (X * ) ≤ (1 − η t ) (E[f (X s,t )] − f (X * )) + σ 2 s,t 2β + βη 2 t δ,(10) where σ s,t = E[ ∇G(X s,t ) −∇g(X s,t ) 2 ]. Proof. By choosingX = X * in (7) and subtracting f (X * ) from both sides, we get the desired result. The following lemma bounds the variance the gradient estimator used in any iteration of the inner-loop of Algorithm 1. Lemma 2 (variance bound). Assume that Assumption 1 holds. 
Fix some epoch s of Algorithm 1, and let {X s,t } T +1 t=1 be the iterates generated throughout the epoch. Then, σ 2 s,t = E[ ∇G(X s,t ) −∇g(X s,t ) 2 ] ≤ 8β 2 G αk t (E[f (X s )] − f (X * )) + 8β 2 G αk t (E[f (X s,t )] − f (X * )) + 2σ 2 k s .(11) Proof. Fix some epoch s and iteration t of the inner loop. Since for all 1 ≤ i < j ≤ k t , ∇g (i) (X) and ∇g (j) (X) are i.i.d. random variables, and E i [∇g (i) (X)] = E j [∇g (j) (X)] = ∇G(X), E 1 k t kt i=1 ∇g (i) (X s ) − ∇g (i) (X s,t ) − ∇G(X s ) − ∇G(X s,t ) 2 = 1 k t E ∇g (1) (X s ) − ∇g (1) (X s,t ) − ∇G(X s ) − ∇G(X s,t ) 2 .(12) In the same way, E[ ∇G(X s ) −∇g(X s ) 2 ] = 1 k t E[ ∇G(X s ) − ∇g (1) (X s ) 2 ] ≤ σ 2 k s .(13) By the definition of∇g(X) we have that E[ ∇G(X s,t ) −∇g(X s,t ) 2 ] = E ∇G(X s,t ) − ∇G(X s ) − 1 k t kt i=1∇ g (i) (X s,t ) + 1 k t kt i=1∇ g (i) (X s ) −∇g(X s ) + ∇G(X s ) 2 ≤ 2E 1 k t kt i=1 ∇ g (i) (X s ) −∇g (i) (X s,t ) − ∇G(X s ) − ∇G(X s,t ) 2 + 2E[ ∇G(X s ) −∇g(X s ) 2 ]. Using (12) and (13), we get E[ ∇G(X s,t ) −∇g(X s,t ) 2 ] ≤ 2 k t E[ ∇g (1) (X s ) − ∇g (1) (X s,t ) − (∇G(X s ) − ∇G(X s,t )) 2 ] + 2σ 2 k s . For any random vector v, the variance is bounded by its second moment, i.e. E[ v − E[v] 2 ] ≤ E[ v 2 ]. In our case E[∇g (1) (X s ) − ∇g (1) (X s,t )] = ∇G(X s ) − ∇G(X s,t ). There- fore, E[ ∇G(X s,t ) −∇g(X s,t ) 2 ] ≤ 2 k t E[ ∇g (1) (X s ) − ∇g (1) (X s,t ) 2 ] + 2σ 2 k s = 2 k t E[ ∇g (1) (X s ) − ∇g (1) (X * ) − ∇g (1) (X s,t ) + ∇g (1) (X * ) 2 ] + 2σ 2 k s ≤ 4 k t E[ ∇g (1) (X s ) − ∇g (1) (X * ) 2 ] + 4 k t E[ ∇g (1) (X s,t ) − ∇g (1) (X * ) 2 ] + 2σ 2 k s . Using the β G -smoothness of g (1) we have E[ ∇G(X s,t ) −∇g(X s,t ) 2 ] ≤ 4β 2 G k t E[ X s − X * 2 ] + 4β 2 G k t E[ X s,t − X * 2 ] + 2σ 2 k s . Finally, using the α-strong convexity of f we obtain E[ ∇G(X s,t ) −∇g(X s,t ) 2 ] ≤ 8β 2 G αk t (E[f (X s )] − f (X * )) + 8β 2 G αk t (E[f (X s,t )] − f (X * )) + 2σ 2 k s . The following theorem bounds the approximation error of Algorithm 1. Theorem 3. Assume that Assumption 1 holds. Let {X s } s≥1 be a sequence generated by Algorithm 1 with parameters T = 8β 3α ln 8 + 1, η t = α 2β , k s = 32σ 2 αC 0 2 s−1 and k t = 32β 2 G α 2 , where C 0 ≥ h 1 . Then, for all s ≥ 1 it holds that: E[f (X s )] − f (X * ) ≤ C 0 1 2 s−1 + 8αδ 7 .(14) Proof. Let us define h s : = E[f (X s )] − f (X * ) for all s ≥ 1, and h s,t := E[f (X s,t )] − f (X * ) for all s, t ≥ 1. Fix some epoch s and iteration t of the inner loop. Using Corollary 1 and Lemma 2 we have that h s,t+1 ≤ (1 − η t ) h s,t + 1 2β 8β 2 G αk t h s + 8β 2 G αk t h s,t + 2σ 2 k s + βη 2 t δ = 1 − η t + 4β 2 G αβk t h s,t + 4β 2 G αβk t h s + σ 2 βk s + βη 2 t δ . Plugging k t = 16β 2 G αβηt we get h s,t+1 ≤ 1 − η t + η t 4 h s,t + η t 4 h s + σ 2 βk s + βη 2 t δ . Plugging η t = α 2β we get h s,t+1 ≤ 1 − 3α 8β h s,t + α 8β h s + σ 2 βk s + α 2 δ 4β . Fixing an epoch s and unrolling the recursion for t = (T − 1) . . . 1 we get h s,T ≤ 1 − 3α 8β h s,T −1 + α 8β h s + σ 2 βk s + α 2 δ 4β ≤ ... ≤ 1 − 3α 8β T −1 h s,1 + α 8β h s + σ 2 βk s + α 2 δ 4β T −1 k=1 1 − 3α 8β T −k−1 = 1 − 3α 8β T −1 h s,1 + 1 3 h s + 8σ 2 3αk s + 2αδ 3 1 − 1 − 3α 8β T −1 . h s,T = h s+1 and h s,1 = h s and so h s+1 ≤ 1 − 3α 8β T −1 h s + 1 3 h s + 8σ 2 3αk s + 2αδ 3 1 − 1 − 3α 8β T −1 = 1 3 + 2 3 1 − 3α 8β T −1 h s + 8σ 2 3αk s + 2αδ 3 1 − 1 − 3α 8β T −1 ≤ 1 3 + 2 3 e − 3α 8β (T −1) h s + 8σ 2 3αk s + 2αδ 3 1 − 1 − 3α 8β T −1 . 
Choosing T = 8β 3α ln 8 + 1, we get h s+1 ≤ 1 3 + 2 3 e − 3α 8β ( 8β 3α ln 8) h s + 8σ 2 3αk s + 2αδ 3 1 − 1 − 3α 8β 8β 3α ln 8 = 5 12 h s + 8σ 2 3αk s + 2αδ 3 1 − 1 − 3α 8β 8β 3α ln 8 ≤ 5 12 h s + 8σ 2 3αk s + 2αδ 3 .(15) Now, we use induction over s to prove our claimed bound h s ≤ C 0 1 2 s−1 + 8αδ 7 .(16) The base case s = 1, follows from the choice C 0 ≥ h 1 . For s ≥ 1 using (15) with k s = 32σ 2 αC 0 2 s−1 we get, h s+1 ≤ 5 12 h s + C 0 12 1 2 s−1 + 2αδ 3 . Using the induction hypothesis for h s in (16) gives us h s+1 ≤ 5 12 C 0 1 2 s−1 + 5 12 8αδ 7 + C 0 12 1 2 s−1 + 2αδ 3 = C 0 1 2 s + 8αδ 7 . We now prove Theorem 1, which is a direct corollary of Theorem 3. Proof of Theorem 1. By Theorem 3 it is implied that to achieve an ε-expected error, it suffices to fix δ = 7ǫ 16α and to complete S = log 2 C 0 ε + 2 epochs of Algorithm 1. For this number of epochs we upper bound the overall number of stochastic gradients as follows. S s=1 k s = 32σ 2 αC 0 S s=1 2 s−1 = 32σ 2 αC 0 2 S − 1 = 32σ 2 αC 0 2 log 2 C 0 ε +2 − 1 ≤ 128σ 2 α 1 ǫ . (17) S s=1 T t=1 k t = S s=1 T t=1 32β 2 G α 2 = 32β 2 G α 2 8β 3α ln 8 + 1 log 2 C 0 ε + 2 .(18) All together, S s=1 k s + S s=1 T t=1 k t ≤ 128σ 2 α 1 ǫ + 32β 2 G α 2 8β 3α ln 8 + 1 log 2 C 0 ε + 2 .(19) Finite-sum setting In this section we assume that G(X) from Problem (3) is in the form of a finite sum, i.e. G(X) = 1 n n i=1 g i (X). The stochastic oracle in this setting simply samples a function g i (X), i ∈ [n], uniformly at random. In this case, in the outer loop of Algorithm 1 we takẽ ∇g(X) = 1 n n i=1 ∇g i (X) = ∇G(X). Algorithm 2 Finite-Sum Variance-Reduced Generalized Conditional Gradient Input: T , {η t } T −1 t=1 ⊂ [0, 1], {k t } T −1 t=1 , {k s } s≥1 ⊂ N, δ ≥ 0. Initialization: Choose some X 1 ∈ dom(h). for s = 1, 2, ... dõ ∇g(X s ) = 1 n n i=1 ∇g i (X s ) {snap-shot gradient}. X s,1 = X s for t = 1, 2, ..., T − 1 do Sample g (1) , ..., g (kt) from D. Define∇g(X s,t ) = 1 kt kt i=1 ∇g (i) (X s,t ) − ∇g (i) (X s ) −∇g(X s ) . V t = arg min V∈V ψ t (V) := V − X s,t + 1 2βηt (∇g(X s,t ) + ∇R(X s,t )) 2 F + 1 βηt h(V) {in fact it suffices that ψ t (V t ) ≤ ψ t (X * ) + δ for some optimal solution X * }. X s,t+1 = (1 − η t )X s,t + η t V t end for X s+1 = X s,T end for The following theorem is analogous to Theorem 3 and bounds the approximation error of Algorithm 2. Theorem 4. Assume that Assumption 1 holds. Let {X s } s≥1 be a sequence generated by Algorithm 2. Then, Algorithm 2 with T = 8β 3α ln 8 + 1 iterations of the inner loop at each epoch s, a step size of η t = α 2β , and k t = 32β 2 G α 2 gradients implemented by the stochastic oracle at inner loop iterations t, such that C 0 ≥ h 1 , for all s ≥ 1 guarantees that: E[f (X s )] − f (X * ) ≤ C 0 5 12 s−1 + 8αδ 7 . Proof. Since∇g(X) = 1 n n i=1 ∇g i (X) = ∇G(X), we get that E[ ∇G(X s ) −∇g(X s ) 2 ] = 0. Using this inequality instead of (13) in the proof of Lemma 2, directly gives us the improved bound: E[ ∇G(X s,t ) −∇g(X s,t ) 2 ] ≤ 8β 2 G αk t (E[f (X s )] − f (X * )) + 8β 2 G αk t (E[f (X s,t )] − f (X * )). We define h s , h s,t for all s, t ≥ 0 as in the proof of Theorem 3. Plugging the above new bound into Corollary 1, we get h s,t+1 ≤ (1 − η t ) h s,t + 1 2β 8β 2 G αk t h s + 8β 2 G αk t h s,t + βη 2 t δ = 1 − η t + 4β 2 G αβk t h s,t + 4β 2 G αβk t h s + βη 2 t δ. From here the rest of the proof closely follows that of Theorem 3. Taking k t = 16β 2 G αβηt , h s,t+1 ≤ 1 − η t + η t 4 h s,t + η t 4 h s + βη 2 t δ. 
Taking η t = α 2β we get h s,t+1 ≤ 1 − 3α 8β h s,t + α 8β h s + α 2 δ 4β . Unrolling the recursion for all t in epoch s: h s,T ≤ 1 − 3α 8β h s,T −1 + α 8β h s + α 2 δ 4β ≤ ... ≤ 1 − 3α 8β T −1 h s,1 + α 8β h s + α 2 δ 4β T −1 k=1 1 − 3α 8β T −k−1 = 1 − 3α 8β T −1 h s,1 + 1 3 h s + 2αδ 3 1 − 1 − 3α 8β T −1 . h s,T = h s+1 and h s,1 = h s and so h s+1 ≤ 1 − 3α 8β T −1 h s + 1 3 h s + 2αδ 3 1 − 1 − 3α 8β T −1 = 1 3 + 2 3 1 − 3α 8β T −1 h s + 2αδ 3 ≤ 1 3 + 2 3 e − 3α 8β (T −1) h s + 2αδ 3 . Choosing T = 8β 3α ln 8 + 1, we get h s+1 ≤ 1 3 + 2 3 e − 3α 8β ( 8β 3α ln 8) h s + 2αδ 3 = 5 12 h s + 2αδ 3 . Using the same induction argument as in the proof of Theorem 3, we conclude that for all s: h s+1 ≤ 5 12 s C 0 + 2αδ 3 s k=1 5 12 s−k = 5 12 s C 0 + 24αδ 21 1 − 5 12 s ≤ 5 12 s C 0 + 8αδ 7 . We now prove Theorem 2, which is a direct corollary of Theorem 4. Proof of Theorem 2. By Theorem 4 it is implied that to achieve an ε-expected error, setting δ = 7ǫ 16α , we need to compute at most S = log 12 5 2C 0 ε + 1 epochs of Algorithm 2. Therefore, the overall number of exact gradients to be computes is at most S s=1 n = n log 12 5 2C 0 ε + 1 , and the overall number of stochastic gradients is at most S s=1 T t=1 k t = S s=1 T t=1 32β 2 G α 2 = 32β 2 G α 2 8β 3α ln 8 + 1 log 12 5 2C 0 ε + 1 . Applications to Non-smooth Problems In this section we turn to discuss applications of our results to non-smooth problems. Concretely, we consider composite models which take the form of Model (3), with the difference that we now assume that the function R(X) is non-smooth, however, admits a known smoothing scheme. We then discuss in detail three concrete applications of interest: recovering a simultaneously low-rank and sparse matrix, recovering a low-rank matrix subject to linear constraints, and recovering a low-rank and sparse matrix from linear measurements with the elastic-net regularizer. Applying our results to non-smooth problems via smoothing In order to fit the nonsmooth problems considered in this section to our smooth model (3), we build on the smoothing framework introduced in [2], which replaces the nonsmooth term R(X) with a smooth approximation. The following definition is taken from [2]. Definition 1. Let R : V → (−∞, ∞] be a closed, proper and convex function and let X ⊆ dom(R) be a closed and convex set. R is (θ, γ, K)-smoothable over X if there exists γ 1 and γ 2 such that γ = γ 1 + γ 2 ≥ 0 such that for every µ ≥ 0 there exists a continuously differentiable function R µ : V → (−∞, ∞] such that: (a) R(x) − γ 1 µ ≤ R µ (x) ≤ R(x) + γ 2 µ for every x ∈ X. (b) There exists K ≥ 0 and θ ≥ 0 such that ∇R µ (x) − ∇R µ (x) ≤ K + θ µ x − y for every x, y ∈ X. Formally, now we consider applying our algorithms to non-smooth optimization problems of the following form: min X∈V f (X) := G(X) + R(X) + h(X),(20) with the following assumptions (replacing Assumption 1): Assumption 2. • G is stochastic, i.e., G(X) = E g∼D [g(X)], where D is a distribution over functions g : V → R, given by a sampling oracle. G is convex and differentiable, and for all g ∈ supp(D), g is β G -smooth, and there exists σ ≥ 0 such that σ ≥ sup X∈V E[ ∇G(X) − ∇g(X) 2 ]. • R : V → (−∞, ∞] is deterministic, (θ, γ, K)-smoothable, and convex. • G + R is α-strongly convex. • h : V → (−∞, ∞] is deterministic, non-smooth, proper, lower semicontinuous and convex. We will denote the µ-smooth approximation of R(X) as R µ (X), and its smoothness parameter to be β R = K + θ µ . 
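To make Definition 1 concrete, the following minimal numerical check (our own sketch, not code from the paper) verifies both properties for the Huber smoothing of the scalar absolute value, which is the per-coordinate building block of the ℓ1-norm smoothing used later in Example 1; for this example the parameters are γ_1 = 1/2, γ_2 = 0, θ = 1, and K = 0.

```python
import numpy as np

def huber(t, mu):
    """One-dimensional Huber function H_mu(t)."""
    return np.where(np.abs(t) <= mu, t ** 2 / (2 * mu), np.abs(t) - mu / 2)

def huber_grad(t, mu):
    """Gradient of H_mu: linear inside [-mu, mu], sign(t) outside."""
    return np.where(np.abs(t) <= mu, t / mu, np.sign(t))

mu = 0.1
t = np.linspace(-3, 3, 2001)

# Property (a) of Definition 1 with gamma_1 = 1/2, gamma_2 = 0:
# |t| - mu/2 <= H_mu(t) <= |t|.
gap = np.abs(t) - huber(t, mu)
assert np.all(gap >= -1e-12) and np.all(gap <= mu / 2 + 1e-12)

# Property (b): the gradient is (K + theta/mu)-Lipschitz with K = 0, theta = 1.
rng = np.random.default_rng(0)
pairs = rng.uniform(-3, 3, size=(1000, 2))
diff = np.abs(pairs[:, 0] - pairs[:, 1])
mask = diff > 1e-9
ratio = np.abs(huber_grad(pairs[:, 0], mu) - huber_grad(pairs[:, 1], mu))[mask] / diff[mask]
assert np.all(ratio <= 1 / mu + 1e-9)

print("largest gap |t| - H_mu(t):", gap.max(), "  bound mu/2:", mu / 2)
print("largest gradient Lipschitz ratio:", ratio.max(), "  bound 1/mu:", 1 / mu)
```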
As in our discussions so far, considering Model (20) especially in the context of low-rank matrix optimization problems (e.g., h(·) is an indicator function for a nuclear-norm ball or the trace-bounded positive semidefinite cone, or an analogous regularization function), we assume that the optimal solution X * is naturally of low-rank and we want to rely on SVD computations whose rank does not exceeds that of X * -the optimal solution to the original non-smooth problem. However, when put in the context of this section and considering Model (20), the rank of SVD computations required by the results developed in previous sections corresponded to the optimal solution of the smoothed problem, i.e., after R(·) is replaced with a smooth approximation R µ (·). In particular, it can very much be the case, that even though the optimal solution to the smooth problem is very close (both in norm and in function value) to the optimal solution of the non-smooth problem, its rank is much higher. Thus, in this section, towards developing an algorithm that relies on SVD computation with rank at most that of the non-smooth optimum, we introduce the following modified definition of a weak-proximal oracle. Definition 2. We say an Algorithm A is a (δ 1 , δ 2 )-weak proximal oracle for Model (20),if for point X ∈ dom(h) and step-size η, A(X, η) returns a point V ∈ dom(h) such that ψ(V, X, η) ≤ ψ(X * , X, η) + δ 1 , whereX * is a feasible point satisfying |f (X * ) − f (X * )| ≤ δ 2 , ψ(V, X, η) := V − X + 1 2βη t (∇g(X) + ∇R µ (X)) 2 + 1 βη t h(V), and R µ (·) is the µ-smooth approximation of R(·). Henceforth, we consider Algorithm 1 with the single difference: now V t is the ouput of a (δ 1 , δ 2 )-weak proximal oracle, as defined in Definition 2. Note that in the context of low-rank problems and in the ideal case δ 1 = δ 2 = 0 3 , the implementation of the oracle in Definition 2 is exactly the same as the weak proximal oracle discussed before, i.e., if h(·) is for instance the indicator function for a radius-τ nuclear-norm ball, then implementing the oracle in Definition 2 amounts to a Euclidean projection of the rank(X * )-approximation of A t := X − 1 2βηt (∇g(X) + ∇R µ (X)) onto the nuclear-norm ball. Here, the tolerances δ 1 , δ 2 allow us to absorb the error due to the smoothing approximation and numerical errors in SVD computations. The following theorem is analogues to Theorem 3. Theorem 5. Assume that Assumption 2 holds. Let {X s } s≥1 be a sequence generated by Algorithm 1 when applied to the smooth approximation of Problem (20), and let X * denote the optimal solution of the non-smooth problem. Then, using the parameters T = 8β 3α ln 8 + 1, η t = α 2β , k s = 32σ 2 αC 0 2 s−1 and k t = 32β 2 G α 2 for C 0 such that C 0 ≥ h 1 , guarantees that for all s ≥ 1: E[f (X s )] − f (X * ) ≤ C 0 1 2 s−1 + 8 7 αδ 1 + 23 7 γµ. Proof. Denote the smoothed function by f µ (X) := G(X) + R µ (X) + h(X). Let X * and X * µ denote the optimal solutions of the non-smooth and smoothed functions respectively. 
By applying Algorithm 1 to f µ (X), such that at each iteration V t is chosen as a point that satisfies ψ t (V t ) ≤ ψ t (X * ) + δ 1 , we get according to Lemma 1 E[f µ (X s,t+1 )] ≤ (1 − η t ) E[f µ (X s,t )] + η t f µ (X * ) + σ 2 s,t 2β + βη 2 t δ 1 .(21) We notice that by the definition of the smoothing and optimality of X * , f µ (X * ) ≤ f (X * ) + γ 2 µ ≤ f (X * µ ) + γ 2 µ ≤ f µ (X * µ ) + γµ.(22) By plugging (22) into (21) and subtracting f µ (X * µ ) from both sides we get E[f µ (X s,t+1 )] − f µ (X * µ ) ≤ (1 − η t ) (E[f µ (X s,t )] − f µ (X * µ )) + σ 2 s,t 2β + βη 2 t δ 1 + η t γµ. Following the proof of Theorem 3 with δ = δ 1 + µγ βηt gives us E[f µ (X s )] − f µ (X * µ ) ≤ C 0 1 2 s−1 + 8 7 (αδ 1 + 2γµ) .(23) Using the optimality of X * µ and the definition of the smoothing we get, E[f µ (X s )] − f µ (X * µ ) ≥ E[f µ (X s )] − f µ (X * µ ) ≥ E[f (X s )] − f (X * µ ) − γµ.(24) Combining (23) and (24) we obtain E[f (X s )] − f (X * ) ≤ C 0 1 2 s−1 + 8 7 αδ 1 + 23 7 γµ. Corollary 2. Assume that Assumption 2 holds. Applying Theorem 5 with the parameters δ 1 = 7ε 32α and µ = 7ε 92γ , guarantees that the overall number of epochs to reach an ǫapproximated solution in expectation is bounded by O ln 1 ε , the total number of calls to the (δ 1 , δ 2 )-weak proximal oracle is bounded by O β α ln 1 ε , and the total number of stochastic gradients sampled is bounded by O σ 2 αε + β 2 G β α 3 ln 1 ε . Proof. By Theorem 5 it is implied that to achieve an ε-stochastic error E[f (X S )]−f (X * ) ≤ ε we need to compute S ≥ log 2 C 0 ε + 2 iterations. The rest follows from the calculations brought in (17), (18), (19). Specific examples We now discuss several applications of Corollary 2 to specific problems. Example 1: Low-rank and sparse matrix estimation As discussed in the introduction, this work is largely motivated by matrix recovery problems, such as low-rank and sparse matrix estimation. In order to show the application of our algorithm for this matrix estimation problem, we state a corresponding optimization problem: min X * ≤τ 1 2 X − E M∼D [M] 2 F + λ X 1 ,(25) where D is an unknown distribution over instances. For problem (25) to fit the Model (20), we take G(X) = E (M,N)∼D×D 1 2 X − M, X − N .(26) Since M and N are i.i.d, this is equivalent to G(X) = 1 2 E M∼D [X − M], E N∼D [X − N] = 1 2 X − E M∼D [M], X − E M∼D [M] = 1 2 X − E M∼D [M] 2 F . It should be noted that for this function G(X), the stochastic gradients are of the form ∇g (i) (X) = X − M for some M ∼ D. As a result, for a fixed epoch s and iteration t, we have∇ g(X s,t ) = 1 k t kt i=1 X s,t − X s +∇g(X s ) . As can be seen,∇g(X s,t ) is independent of the stochastic samples within the inner-loop (since they cancel-out), and therefore we can simple set k t = 0. Smoothing the ℓ 1 -norm has a well known solution, as shown in [2]. The µ-smooth approximation of X 1 is R µ (X) = d j=1 m i=1 H µ (X ij ), with parameters (1, md 2 , 0), where H µ (t) is the one dimensional Huber function, defined as: H µ (t) = t 2 2µ , |t| ≤ µ |t| − µ 2 , |t| > µ . This satisfies R µ (X) ≤ X 1 ≤ R µ (X) + mdµ 2 .(27) h(X) is to be taken to be the indicator over the nuclear norm ball, i.e., h(X) = χ C , where C = {X ∈ R m×d : X * ≤ τ }. Corollary 3. Consider running Algorithm 1 for the smooth approximation of Problem (25), with parameters T = 8 ln 8 3 λ µ + 1 + 1, η t = µ 2λ+2µ , k s = 32E[ M −E[M ] 2 ] C 0 2 s−1 and k t = 32. Let X * denote the optimal solution Problem (25). 
Then, running S ≥ log_2(C_0/ε) + 2 epochs of the outer-loop guarantees that:
E[f(X_S)] − f(X*) ≤ ε/4 + (8/7)δ_1 + (23md/14)µ.    (28)
In particular, taking a smoothing parameter of µ = 7ε/(46md) and δ_1 = 7ε/32, we obtain
E[f(X_S)] − f(X*) ≤ ε.    (29)
Proof. The parameters of the problem are as follows: α = 1, β_G = 1, γ = md/2, β_R = λ/µ, and σ² = E[‖M − E[M]‖²]. Therefore, by Theorem 5 we get the result in (28). By choosing µ = 7ε/(46md) and δ_1 = 7ε/32, the result in (29) is immediate.
Example 2: Linearly constrained low-rank matrix estimation
Another example is the problem of recovering a low-rank matrix subject to linear constraints, which can be written in penalized form as:
min_{‖X‖_* ≤ τ} (1/2)‖X − E_{M∼D}[M]‖_F^2 + max_{i∈[n]} (⟨A_i, X⟩ − b_i),    (30)
where D is again an unknown distribution over instances. Here the matrices {A_i}_{i∈[n]} and scalars {b_i}_{i∈[n]} can absorb a penalty factor λ. By [2], the µ-smooth approximation of max_{i∈[n]} (⟨A_i, X⟩ − b_i) is R_µ(X) = µ log(Σ_{i=1}^n e^{(⟨A_i,X⟩ − b_i)/µ}), with parameters (‖A‖², log n, 0), where A: R^{m×d} → R^n is the linear transformation A(X) = (tr(A_1^⊤X), tr(A_2^⊤X), . . . , tr(A_n^⊤X))^⊤ for A_1, ..., A_n ∈ R^{m×d}, and ‖A‖ = max{‖A(X)‖_2 : ‖X‖_F = 1}. This satisfies
R_µ(X) ≤ max_{i∈[n]} (⟨A_i, X⟩ − b_i) ≤ R_µ(X) + µ log n.    (31)
In this case, G(X) and h(X) are as in Example 1.
Corollary 4. Consider running Algorithm 1 for the smooth approximation of Problem (30), with parameters T = (8 ln 8/3)(‖A‖²/µ + 1) + 1, η_t = µ/(2‖A‖² + 2µ), k_s = (32 E[‖M − E[M]‖²]/C_0)·2^{s−1}, and k_t = 32, and let X* denote the optimal solution to Problem (30). Then, running S ≥ log_2(C_0/ε) + 2 epochs of the outer-loop guarantees that:
E[f(X_S)] − f(X*) ≤ ε/4 + (8/7)δ_1 + (23 log n/7)µ.    (32)
In particular, taking a smoothing parameter of µ = 7ε/(92 log n) and δ_1 = 7ε/32, we obtain
E[f(X_S)] − f(X*) ≤ ε.    (33)
Proof. The parameters of the problem are as follows: α = 1, β_G = 1, γ = log n, β_R = ‖A‖²/µ, and σ² = E[‖M − E[M]‖²]. Therefore, by Theorem 5 we get the result in (32). By choosing µ = 7ε/(92 log n) and δ_1 = 7ε/32, the result in (33) is immediate.
Example 3: Recovering a low-rank and sparse matrix from linear measurements with elastic-net regularization
Finally, we briefly discuss a matrix-sensing problem in which a nuclear-norm constraint is used to promote low-rank solutions and the well-known elastic-net regularizer [20] is used to promote sparsity:
min_{‖X‖_* ≤ τ} E_{(A,b)∼D}[(1/2)(⟨A, X⟩ − b)²] + λ_1‖X‖_1 + λ_2‖X‖_F^2.    (34)
In this example, G(X) = E_{(A,b)∼D}[(1/2)(⟨A, X⟩ − b)²] need not be strongly convex as in the previous examples; however, the elastic-net regularizer R(X) := λ_1‖X‖_1 + λ_2‖X‖_F^2 is strongly convex. The smoothing of Problem (34) and the resulting application of our method go along the same lines as our treatment of Problem (25).
Experiments
In support of our theory, in this section we present preliminary empirical experiments on the problem of low-rank and sparse matrix estimation, Problem (25). We compare our Algorithm 1 (SVRGCG) to previous conditional gradient-type stochastic methods, including the Stochastic Conditional Gradient algorithm (SCG) [6] (see footnote 4) and the Stochastic Conditional Gradient Sliding algorithm (SCGS) [12]. We use synthetic randomly-generated data for the experiments. For all experiments the input matrix is of the form M_0 = E_{M∼D}[M] = YY^⊤ + N, where Y ∈ R^{d×r} is a random sparse matrix each entry of which is zero w.p. 1 − 1/√d and drawn uniformly from {1, . . . , 10} w.p. 1/√d, and N is a d × d random matrix with i.i.d. standard Gaussian entries. We set the dimension to d = 300 and the rank r of Y to either 1 or 10.
In all experiments we set λ = 2, ε = 0.01 · ‖YY^⊤‖_F^2 (i.e., the approximation error is measured relative to the magnitude of the signal), µ = ε/d² (in accordance with Corollary 3), and τ = Tr(YY^⊤). The stochastic oracle is implemented by taking noisy observations of M_0 of the form M^(i) = M_0 + σQ^(i), where each Q^(i) is a random matrix with i.i.d. standard Gaussian entries, and we fix σ = 5. For all three methods we measure i) the obtained (original non-smooth) function value (see (25)) vs. the number of stochastic gradients used, ii) the function value vs. the overall runtime (seconds), and iii) the function value vs. the overall number of rank-one SVD computations used. Since the overall running time is highly dependent on the specific implementation, we report the number of rank-one SVD computations as an implementation-independent proxy for the overall runtime. For our method SVRGCG, we compute the overall number of rank-one SVD computations by multiplying the number of SVD factorizations by the rank of the factorization used (see footnote 5). In the first experiment (Figure 1) we set rank(Y) = 1, in which case all three algorithms use only rank-one SVD computations. In the second experiment (Figure 2), we set rank(Y) = 10, in which case algorithms SCG and SCGS still use only rank-one SVD computations, whereas our algorithm SVRGCG uses rank-10 SVD computations (hence the left panel in Figure 2 counts 10 times the number of thin-SVD computations used by our algorithm). The results for each experiment are averages of 30 i.i.d. runs. Importantly, all three algorithms were implemented as suggested by theory without attempts to optimize their performance, with only two exceptions. First, in our algorithm SVRGCG we use the rank of Y to set the rank of the SVD computations (since naturally YY^⊤ should be close to the optimal solution). Second, in [12] it is suggested to run the conditional gradient method to solve the proximal sub-problems in algorithm SCGS until a certain quantity, which serves as a certificate for the quality of the solution, is reached. However, we observe that in practice obtaining this certificate takes an unreasonable number of iterations, which renders the overall method highly suboptimal w.r.t. the alternatives. Hence, in our implementation we limit the number of CG inner iterations to the dimension d. The results are presented in Figures 1 and 2. It can be seen that our algorithm SVRGCG clearly outperforms both SCG and SCGS with respect to all three measures in the two experiments.
Figure 1: Comparison between methods with rank(YY^⊤) = 1.
Figure 2: Comparison between methods with rank(YY^⊤) = 10.
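As a concrete illustration of the setup just described, the following is a minimal, self-contained sketch (our own code, not the implementation used for the figures) that generates the synthetic data M_0 = YY^⊤ + N, instantiates λ = 2, µ = ε/d², σ = 5, and τ = Tr(YY^⊤), and performs a single inner-loop update of Algorithm 1 for Problem (25). The snapshot mini-batch size k_s = 100 is illustrative rather than the theoretical schedule, helper names such as project_nuclear_ball_rank_r are ours, and the computation of V_t follows the weak-proximal-oracle description of Definition 2 (rank-r truncation of A_t followed by a Euclidean projection onto the nuclear-norm ball).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup from the experiments: M0 = Y Y^T + N with a sparse random Y.
d, r, sigma = 300, 10, 5.0
Y = rng.integers(1, 11, size=(d, r)) * (rng.random((d, r)) < 1 / np.sqrt(d))
M0 = Y @ Y.T + rng.standard_normal((d, d))
tau = float(np.trace(Y @ Y.T))                 # nuclear-norm radius
lam, beta_G = 2.0, 1.0
eps = 0.01 * np.linalg.norm(Y @ Y.T, "fro") ** 2
mu = eps / d ** 2                              # smoothing parameter used in the experiments
beta = beta_G + lam / mu                       # smoothness of G + R_mu
eta = mu / (2 * lam + 2 * mu)                  # step size eta_t from Corollary 3

def sample_M(k):
    """Stochastic oracle: noisy observations M^(i) = M0 + sigma * Q^(i)."""
    return M0 + sigma * rng.standard_normal((k, d, d))

def grad_R_mu(X):
    """Gradient of the Huber-smoothed lambda * ||X||_1."""
    return lam * np.where(np.abs(X) <= mu, X / mu, np.sign(X))

def project_nuclear_ball_rank_r(A, r, tau):
    """Rank-r truncation of A followed by Euclidean projection onto {||X||_* <= tau}."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    if s.sum() > tau:                          # project the singular values onto the simplex
        css = np.cumsum(s)                     # s is already sorted in decreasing order
        rho = np.nonzero(s - (css - tau) / np.arange(1, r + 1) > 0)[0][-1]
        s = np.maximum(s - (css[rho] - tau) / (rho + 1), 0.0)
    return (U * s) @ Vt

# Snapshot gradient at the epoch start X_s (here k_s = 100 samples, for illustration only).
X_s = np.zeros((d, d))
M_bar = sample_M(100).mean(axis=0)
grad_snap = X_s - M_bar                        # tilde-grad g(X_s) for g(X) = 0.5 ||X - M||_F^2

# One inner-loop update; since grad g^(i)(X) = X - M^(i), the variance-reduced
# estimator collapses to (X_t - X_s) + grad_snap, as noted in Example 1 (so k_t = 0).
X_t = X_s.copy()
grad_tilde = (X_t - X_s) + grad_snap
A_t = X_t - (grad_tilde + grad_R_mu(X_t)) / (2 * beta * eta)
V_t = project_nuclear_ball_rank_r(A_t, r, tau)
X_next = (1 - eta) * X_t + eta * V_t
obj = 0.5 * np.linalg.norm(X_next - M0, "fro") ** 2 + lam * np.abs(X_next).sum()
print("non-smooth objective after one step:", obj)
```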
total number of gradients computed for any of the n functions in the support of D is bounded by:
Table 2: Comparison of complexity bounds for conditional gradient-type methods for solving Model (3). X* denotes the unique optimal solution. The table lists only the leading-order terms.
Corollary 1. Assume that Assumption 1 holds. Fix some epoch s of Algorithm 1, and let {X_{s,t}}_{t=1}^{T+1} be the iterates generated throughout the epoch. Then, if 2βη_t ≤ α, the bound in Equation (10) holds.
Footnotes:
1. See for instance the discussions in [7].
2. In particular, this suboptimal sample complexity will scale both with β, the overall gradient Lipschitz parameter, and with 1/ε, whereas the optimal sample complexity is independent of β (which, as we recall, is typically quite large in our setting due to R(·)).
3. These can be made arbitrarily small by the choice of the smoothing parameter and the accuracy of the SVD computations.
4. In [6] it appears as Stochastic Frank-Wolfe (SFW).
5. This is reasonable since the runtime of a low-rank SVD typically scales linearly with the rank.
Acknowledgments: We would like to thank Shoham Sabach for many fruitful discussions throughout the preparation of this manuscript.
References
[1] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[2] Amir Beck and Marc Teboulle. Smoothing and first order methods: a unified framework. SIAM Journal on Optimization, 22(2):557-580, 2012.
[3] Sébastien Bubeck. Convex optimization: Algorithms and complexity. Foundations and Trends in Machine Learning, 8(3-4):231-357, 2015.
[4] Roy Frostig, Rong Ge, Sham M. Kakade, and Aaron Sidford. Competing with the empirical risk minimizer in a single pass. In Conference on Learning Theory, pages 728-763, 2015.
[5] Dan Garber. Faster projection-free convex optimization over the spectrahedron. In Advances in Neural Information Processing Systems 29, pages 874-882, 2016.
[6] Elad Hazan and Haipeng Luo. Variance-reduced and projection-free stochastic optimization. In International Conference on Machine Learning, pages 1263-1271, 2016.
[7] Zeyuan Allen-Zhu, Elad Hazan, Wei Hu, and Yuanzhi Li. Linear convergence of a Frank-Wolfe type algorithm over trace norm balls. In Advances in Neural Information Processing Systems (NIPS), pages 6192-6201, 2017.
[8] Martin Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In Proceedings of the 30th International Conference on Machine Learning (ICML), pages 427-435, 2013.
[9] Gauthier Gidel, Tony Jebara, and Simon Lacoste-Julien. Frank-Wolfe algorithms for saddle point problems. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 362-371, 2017.
[10] Gauthier Gidel, Tony Jebara, and Simon Lacoste-Julien. Frank-Wolfe splitting via augmented Lagrangian method. In International Conference on Artificial Intelligence and Statistics (AISTATS), pages 1456-1465, 2018.
[11] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315-323, 2013.
[12] Guanghui Lan and Yi Zhou. Conditional gradient sliding for convex optimization. SIAM Journal on Optimization, 26(2):1379-1409, 2016.
[13] Alp Yurtsever, Olivier Fercoq, Francesco Locatello, and Volkan Cevher. A conditional gradient framework for composite convex minimization with applications to semidefinite programming. In Proceedings of the 35th International Conference on Machine Learning (ICML), pages 5713-5722, 2018.
[14] Cun Mu, Yuqian Zhang, John Wright, and Donald Goldfarb. Scalable robust matrix recovery: Frank-Wolfe meets proximal methods. SIAM Journal on Scientific Computing, 38(5):A3291-A3317, 2016.
[15] Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87. Springer Science & Business Media, 2013.
[16] Dan Garber, Shoham Sabach, and Atara Kaplan. Fast generalized conditional gradient method with applications to matrix recovery problems. CoRR, abs/1802.05581, 2018.
[17] Emile Richard, Pierre-André Savalle, and Nicolas Vayatis. Estimation of simultaneously sparse and low rank matrices. In Proceedings of the 29th International Conference on Machine Learning (ICML), 2012.
[18] Yu-Xiang Wang, Huan Xu, and Chenlei Leng. Provable subspace clustering: When LRR meets SSC. In Advances in Neural Information Processing Systems, pages 64-72, 2013.
[19] Ke Zhou, Hongyuan Zha, and Le Song. Learning social infectivity in sparse low-rank networks using multi-dimensional Hawkes processes. In Artificial Intelligence and Statistics, pages 641-649, 2013.
[20] Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301-320, 2005.
[]
[ "Safe Policy Learning through Extrapolation: Application to Pre-trial Risk Assessment *", "Safe Policy Learning through Extrapolation: Application to Pre-trial Risk Assessment *" ]
[ "Eli Ben-Michael [email protected]:ebenmichael.github.io ", "D James Greiner ", "Kosuke Imai [email protected]:https:@imai.fas.harvard.edu ", "Zhichao Jiang ", "\nDepartment of Government and Department of Statistics\nInstitute for Quantitative Social Science\nDepartment of Biostatistics and Epidemiology\n‡ Honorable S. William Green Professor of Public Law, Harvard Law School\nHarvard University\n1525 Massachusetts Avenue, Griswold 504, 1737 Cambridge Street02138, 02138., 02138Oxford Street, Cambridge, Cambridge, CambridgeMA, MA, MA\n", "\nUniversity of Massachusetts\nAmherstMA\n" ]
[ "Department of Government and Department of Statistics\nInstitute for Quantitative Social Science\nDepartment of Biostatistics and Epidemiology\n‡ Honorable S. William Green Professor of Public Law, Harvard Law School\nHarvard University\n1525 Massachusetts Avenue, Griswold 504, 1737 Cambridge Street02138, 02138., 02138Oxford Street, Cambridge, Cambridge, CambridgeMA, MA, MA", "University of Massachusetts\nAmherstMA" ]
[]
Algorithmic recommendations and decisions have become ubiquitous in today's society. Many of these and other data-driven policies, especially in the realm of public policy, are based on known, deterministic rules to ensure their transparency and interpretability. For example, algorithmic pre-trial risk assessments, which serve as our motivating application, provide relatively simple, deterministic classification scores and recommendations to help judges make release decisions. How can we use the data based on existing deterministic policies to learn new and better policies? Unfortunately, prior methods for policy learning are not applicable because they require existing policies to be stochastic rather than deterministic. We develop a robust optimization approach that partially identifies the expected utility of a policy, and then finds an optimal policy by minimizing the worst-case regret. The resulting policy is conservative but has a statistical safety guarantee, allowing the policy-maker to limit the probability of producing a worse outcome than the existing policy. We extend this approach to common and important settings where humans make decisions with the aid of algorithmic recommendations. Lastly, we apply the proposed methodology to a unique field experiment on pre-trial risk assessment instruments. We derive new classification and recommendation rules that retain the transparency and interpretability of the existing instrument while potentially leading to better overall outcomes at a lower cost.
null
[ "https://arxiv.org/pdf/2109.11679v3.pdf" ]
237,635,349
2109.11679
ce62d068fddb08b59a1650bd83a2ffb9cffc29a2
Safe Policy Learning through Extrapolation: Application to Pre-trial Risk Assessment * 15 Feb 2022 Eli Ben-Michael [email protected]:ebenmichael.github.io D James Greiner Kosuke Imai [email protected]:https:@imai.fas.harvard.edu Zhichao Jiang Department of Government and Department of Statistics Institute for Quantitative Social Science Department of Biostatistics and Epidemiology ‡ Honorable S. William Green Professor of Public Law, Harvard Law School Harvard University 1525 Massachusetts Avenue, Griswold 504, 1737 Cambridge Street02138, 02138., 02138Oxford Street, Cambridge, Cambridge, CambridgeMA, MA, MA University of Massachusetts AmherstMA Safe Policy Learning through Extrapolation: Application to Pre-trial Risk Assessment * 15 Feb 2022First draft: September 27, 2021 This draft: February 17, 2022(CG# 2370386), National Science Foundation (SES-2051196), Sloan Foundation (Economics Program; 2020-13946), and Arnold Ventures. We thank anonymous reviewers of the IQSS's Alexander and Diviya Magaro Peer Pre-Review Program for useful feedback. † Corresponding author. Postdoctoral Fellow, Institute for Quantitative Social Science, Harvard University. 1algorithm-assisted decision-makingobservational studiesoptimal policy learningrandomized experimentsrobust optimization * Algorithmic recommendations and decisions have become ubiquitous in today's society. Many of these and other data-driven policies, especially in the realm of public policy, are based on known, deterministic rules to ensure their transparency and interpretability. For example, algorithmic pre-trial risk assessments, which serve as our motivating application, provide relatively simple, deterministic classification scores and recommendations to help judges make release decisions. How can we use the data based on existing deterministic policies to learn new and better policies? Unfortunately, prior methods for policy learning are not applicable because they require existing policies to be stochastic rather than deterministic. We develop a robust optimization approach that partially identifies the expected utility of a policy, and then finds an optimal policy by minimizing the worst-case regret. The resulting policy is conservative but has a statistical safety guarantee, allowing the policy-maker to limit the probability of producing a worse outcome than the existing policy. We extend this approach to common and important settings where humans make decisions with the aid of algorithmic recommendations. Lastly, we apply the proposed methodology to a unique field experiment on pre-trial risk assessment instruments. We derive new classification and recommendation rules that retain the transparency and interpretability of the existing instrument while potentially leading to better overall outcomes at a lower cost. Introduction Algorithmic recommendations and decisions are ubiquitous in our daily lives, ranging from online shopping to job interview screening. Many of these algorithm-assisted, or simply data-driven, policies are also used for highly consequential decisions including those in the criminal justice system, social policy, and medical care. One important feature of such policies is that they are often based on known, deterministic rules. This is because transparency and interpretability are required to ensure accountability especially when used for public policy-making. 
Examples include eligibility requirements for government programs (e.g., Canadian permanent residency program; Supplemental Nutrition Assistance Program or SNAP, Center on Budget and Policy Priorities, 2017) and recommendations for medical treatments (e.g., MELD score for liver transplantation, Kamath et al., 2001). The large amounts of data collected after implementing such deterministic policies provide an opportunity to learn new policies that improve on the status quo. Unfortunately, prior approaches for policy learning are not applicable because they require existing policies to be stochastic, typically relying on inverse probability-of-treatment weighting. To address this challenge, we propose a robust optimization approach that finds an improved policy without inadvertently leading to worse outcomes. To do this, we partially identify the expected utility of a policy by calculating all potential values consistent with the observed data, and find the policy that maximizes the expected utility in the worst case. The resulting policy is conservative but has a statistical safety guarantee, allowing the policy-maker to limit the probability for yielding a worse outcome than the existing policy. We formally characterize the gap between this safe policy and the infeasible oracle policy as a function of restrictions imposed on the class of outcome models as well as on the class of policies. After developing the theoretical properties of the safe policy in the population, we show how to empirically construct the safe policy from the data at hand and analyze its statistical properties. We then provide details about the implementation in several representative cases. We also consider two extensions that directly address the common settings, including our application, where a deterministic policy is experimentally evaluated against a "null policy," and humans ultimately make decisions with the aid of algorithmic recommendation. The availability of experimental data weakens the required assumptions while human decisions add extra uncertainty. Our motivating empirical application is the use of pre-trial risk assessment instruments in the American criminal justice system. The goal of a pre-trial risk instrument is to aid judges in deciding which arrestees should be released pending disposition of any criminal charges. Algorithmic recommendations have long been used in many jurisdictions to help judges make release and sentencing decisions. A well-known example is the COMPAS score, which has ignited controversy (e.g., Angwin et al., 2016;Dieterich et al., 2016;Rudin et al., 2020). We analyze a particular pre-trial risk assessment instrument used in Dane county, Wisconsin, that is different from the COMPAS score. This risk assessment instrument assigns integer classification scores to arrestees according to the risk that they will engage in risky behavior. It then aggregates these scores according to a deterministic function and provides an overall release recommendation to the judge. Our goal is to learn new algorithmic scoring and recommendation rules that can lead to better overall outcomes while retaining the transparency of the existing instrument. Importantly, we focus on changing the algorithmic policies, which we can intervene on, rather than judge's decisions, which we cannot. We apply the proposed methodology to the data from a unique field experiment on pre-trial risk assessment . 
Our analysis focuses on two key components of the instrument: (i) classifying the risk of a new violent criminal activity (NVCA) and (ii) recommending cash bail or a signature bond for release. We show how different restrictions on the outcome model, while maintaining the same policy class as the existing one, change the ability to learn new safe policies. We find that if the cost of an NVCA is sufficiently low, we can safely improve upon the existing risk assessment scoring rule by classifying arrestees as lower risk. However, when the cost of an NVCA is high, the resulting safe policy falls back on the existing scoring rule. For the overall recommendation, we find that noise is too large to improve upon the existing recommendation rules with a reasonable level of certainty, so the safe policy retains the status quo. Related work. Recently, there has been much interest in finding population optimal policies from randomized trials and observational studies. These methods typically use either inverse probability weighting (IPW) (e.g. Beygelzimer and Langford, 2009;Qian and Murphy, 2011;Zhao et al., 2012;Zhang et al., 2012;Swaminathan and Joachims, 2015;Kitagawa and Tetenov, 2018;Kallus, 2018) or augmented IPW (e.g. Dudik and Langford, 2011;Luedtke and Van Der Laan, 2016;Athey and Wager, 2021) to estimate and optimize the expected utility of a policy -or a convex relaxation of it -over a set of potential policies. All of these procedures rely on some form of overlap assumption, where the underlying policy that generated the data is randomized -or is stochastic in the case of observational studieswith non-zero probability of assigning any action to any individual. Kitagawa and Tetenov (2018) show that with known assignment probabilities the regret of the estimated policy relative to the oracle policy will decrease with the sample size at the optimal n −1/2 rate. For unknown assignment probabilities without unmeasured confounding, Athey and Wager (2021) show that the augmented IPW approach will achieve this optimal rate instead. Cui and Tchetgen Tchetgen (2021) also use a similar approach to learn optimal policies in instrumental variable settings. In contrast, our robust approach deals with deterministic policies where there is no overlap between the treated and untreated groups. In this setting, we cannot use (augmented) IPW-based approaches because the probability of observing an action is either zero or one. We could take a direct imputation approach that estimates a model for the expected potential outcomes under different actions and uses this model to extrapolate. However, there are many different models that fit the observable data equally well and so the expected potential outcome function is not uniquely point identified. Our proposal is a robust version of the direct imputation approach: we first partially identify the conditional expectation, and then use robust optimization to find the best policy under the worst-case model. Our approach builds on the literature about partial identification of treatment effects (Manski, 2005), which bounds the value of unidentifiable quantities using identifiable ones. We also rely on the robust optimization framework (see Bertsimas et al., 2011, for a review), which embeds the objective or constraints of an optimization problem into an uncertainty set, and then optimizes for the worst-case objective or constraints in that set. We use partial identification to create an uncertainty set for the objective. 
There are several recent applications of robust optimization to policy learning. Kallus and Zhou (2021) consider the IPW approach in the possible presence of unmeasured confounding. They use robust optimization to find the optimal policy across a partially identified set of assignment probabilities under the standard sensitivity analysis framework (Rosenbaum, 2002). In a different vein, Pu and Zhang (2021) study policy learning with instrumental variables. Using the partial identification bounds of Balke and Pearl (1994), they apply robust optimization to find an optimal policy. We use robust optimization in a similar way, but to account for the partial identification brought on by the lack of overlap. In a different setting, Gupta et al. (2020) use robust optimization to find optimal policies when extrapolating to populations different from a study population, without access to individual-level information (see also Mo et al. (2020)). In addition, Cui (2021) discusses various potential objective functions when there is partial identification, derived from classical ideas in decision theory. Finally, there has also been recent interest in learning policies in a multi-armed bandit setting where the policies are not randomized but the covariate distribution guarantees that each arm will be pulled without the need for explicit randomization (see, e.g. Kannan et al., 2018;Bastani et al., 2021;Raghavan et al., 2021). Paper outline. The paper proceeds as follows. Section 2 describes the pre-trial risk assessment instrument and the field experiment that motivate our methodology. Section 3 defines the population safe policy optimization problem and compares the resulting policy to the baseline and oracle polices. Section 4 shows how to compute an empirical safe policy from the observed data and analyzes its statistical properties. Section 5 presents some examples of the model and policy classes that can be used under our proposed framework. Section 6 extends the methodology to incorporate experimental data and human decisions. Section 7 applies the methodology to the pre-trial risk assessment problem. Section 8 concludes. Pre-trial Risk Assessment In this section, we briefly describe a particular pre-trial risk assessment instrument, called the Public Safety Assessment (PSA), used in Dane county, Wisconsin, that motivates our methodology. The PSA is an algorithmic recommendation that is designed to help judges make their pre-trial release decisions. After explaining how the PSA is constructed, we describe an original randomized experiment we conducted to evaluate the impact of the PSA on judges' pre-trial decisions. In Section 7, we apply the proposed methodology to the data from this experiment in order to learn a new, robust algorithmic recommendation to improve judicial decisions. Interested readers should consult Greiner et al. (2020) and for further details of the PSA and experiment. Our primary goal is to construct new algorithmic scoring and recommendation rules that can potentially lead to a higher overall expected utility than the status quo rules we discuss, while retaining the high level of transparency, interpretability, and robustness. In particular, we would like to develop robust algorithmic rules that are guaranteed to outperform the current rules with high probability. Crucially, we are concerned with the consequences of implementing these algorithmic policies on overall outcomes (see also . 
Although evaluating the classification accuracy of these algorithms also requires counterfactual analysis (see, e.g., Kleinberg et al., 2018;Coston et al., 2020), this is not our goal. Similarly, while there are many factors besides the risk assessment instruments that affect the judge's decision and the arrestee's behavior (e.g., the relationship between socioeconomic status and the ability to post bail), we focus on changing the existing algorithms rather than intervening on the other factors. The PSA-DMF system The goal of the PSA is to help judges decide, at first appearance hearings, which arrestees should be released pending disposition of any criminal charges. Because arrestees are presumed to be innocent, it is important to avoid unnecessary incarceration. The PSA consists of classification scores based on the risk that each arrestee will engage in three types of risky behavior: (i) failing to appear in court (FTA), (ii) committing a new criminal activity (NCA), and (iii) committing a new violent criminal activity (NVCA). By law, judges are required to balance between these risks and the cost of incarceration when making their pre-trial release decisions. The PSA consists of separate scores for FTA, NCA, and NVCA risks. These scores are deterministic functions of 9 risk factors. Importantly, the only demographic factor used is the age of an arrestee, and other characteristics such as gender and race are not used. The other risk factors include the current offense and pending charges as well as criminal history, which is based on prior convictions and prior FTA. Each of these scores is constructed by taking a linear combination of underlying risk factors and thresholding the integer-weighted sum. Indeed, for the sake of transparency, policy makers have made these weights and thresholds publicly available (see https://advancingpretrial.org/psa/factors). Table 1 shows the integer weights on these risk factors for the three scores. The FTA score has six levels and is based on four risk factors. The values range from 0 to 7, and the final FTA score is thresholded into values between 1 and 6 by assigning {0 → 1, 1 → 2, 2 → 3, (3, 4) → 4, (5, 6) → 5, 7 → 6}. The NCA score also has six levels, but is based on six risk factors and has a maximum value of 13 before being collapsed into six levels by assigning {0 → 1, (1, 2) → 2, (3, 4) → 3, (5, 6) → 4, (7, 8) → 5), (9, 10, 11, 12, 13) → 6}. Finally, the NVCA score, which will be the focus of our empirical analysis, is binary and is based on the weighted average of five different risk factors -whether the current offense is violent, the arrestee is 20 years old or younger, there is a pending charge at the time of arrest, and the number of prior violent and non-violent convictions. If the sum of the weights is greater than or equal to 4, the PSA returns an NVCA score of 1, flagging the arrestee as being at elevated risk of an NVCA. Otherwise the NVCA score is 0, and the arrestee is not flagged as being at elevated risk. the NCA score are both less than 5, then the recommendation is to only require a signature bond. Otherwise the recommendation is to require cash bail. 1 The experimental data To develop new algorithmic scoring and recommendation rules, we will use data from a field randomized controlled trial conducted in Dane county, Wisconsin. We will briefly describe the experiment here while deferring the details to Greiner et al. (2020) and . 
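Because the PSA scores and the DMF recommendation are deterministic functions of a handful of integer inputs, their structure can be written down directly. The sketch below is our own illustration: it leaves the published integer weights of Table 1 as inputs rather than reproducing them, encodes the FTA collapse map quoted above, and implements only the portion of the DMF recommendation rule stated in the text (a signature bond when both the FTA and NCA scores are below 5).

```python
# Collapse map for the raw FTA points given in the text: {0->1, 1->2, 2->3, (3,4)->4, (5,6)->5, 7->6}.
FTA_COLLAPSE = {0: 1, 1: 2, 2: 3, 3: 4, 4: 4, 5: 5, 6: 5, 7: 6}

def fta_score(raw_points: int) -> int:
    """Collapse the raw weighted sum of the four FTA risk factors (0-7) into the six-level FTA score."""
    return FTA_COLLAPSE[raw_points]

def nvca_flag(points: dict) -> int:
    """Binary NVCA flag. `points` maps each of the five NVCA risk factors to its integer
    contribution under the published weights (Table 1, not reproduced here); the flag is
    raised when the weighted sum reaches the threshold of 4."""
    return int(sum(points.values()) >= 4)

def dmf_bail_recommendation(fta: int, nca: int) -> str:
    """The portion of the DMF release recommendation stated in the text: a signature bond
    is recommended only when both the FTA and NCA scores are below 5."""
    return "signature bond" if fta < 5 and nca < 5 else "cash bail"

# Example with hypothetical weight contributions (not the published weights).
example_points = {"current_offense_violent": 2, "age_20_or_younger": 0,
                  "pending_charge": 1, "prior_violent_convictions": 0,
                  "prior_nonviolent_convictions": 0}
print(nvca_flag(example_points), fta_score(4), dmf_bail_recommendation(4, 5))
```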
In this experiment, the PSA was computed for each first appearance hearing a single judge saw during the study period and was randomly either made available in its entirety to the judge or it was not made available at all. If a case is assigned to the treatment group, the judge received the information including the three PSA scores, the PSA-DMF recommendations, as well as all the risk factors that were used to construct them. For the control group, the judge did not receive the PSA scores and PSA-DMF recommendations but sometimes received some of the information constituting the risk factors. Thus, the treatment in this experiment was the provision of the PSA scores and PSA-DMF recommendations. For each case, we observe the three scores (FTA, NCA, NVCA) and the binary DMF recommendation (signature bond or cash bail), the underlying risk factors used to construct the scores, the binary decision by the judge (signature bond or cash bail), and three binary outcomes (FTA , No NVCA NVCA Total Signature Bond 1130 80 1410 Cash Bail 452 29 481 Total 1782 109 1891 Table 2: Number of cases where the judge assigned an arrestee a signature bond or cash bail, that eventually did or not result in an NVCA. NCA, and NVCA). We focus on first arrest cases in order to avoid spillover effects between cases. All told, there are 1891 cases, in 948 of which judges were given access to the PSA. Table 2 shows the case counts disaggregated by bail type and NVCA, the outcome we consider in Section 7. In 1410 of these cases, the judge assigned a signature bond and 109 cases had an NVCA. A slightly lower fraction of cases where the judge assigned cash bail resulted in an NVCA than cases where the judge assigned a signature bond (χ 2 test for independence p-value: 0.85). Crucially, each component of the PSA is deterministic and no aspect of it was randomized as part of the study. Since our goal is to learn a new, better recommendation system, the problem is the lack of overlap: the probability that any case would have had a different recommendation than it received is zero. Therefore, existing approaches to policy learning, which rely principally on the inverse of this probability, are not applicable for our setting. Instead, we must learn a robust policy through extrapolation. In the remainder of this paper, we will develop a methodological framework to learn new recommendation rules in the absence of this overlap in a robust way, ensuring that the new rules are no worse than the original recommendation, and potentially much better. The Population Safe Policy In order to separate out the key ideas, we will develop our optimal safe policy approach in two parts. In this section, after introducing the notation and describing our setup, we show how to construct a safe policy in the population, i.e., with an infinite number of samples. We will first describe the population optimization problem that constructs a safe policy. Then, we will give concrete examples to build intuition before describing our methodology in greater generality. Finally, we develop several theoretical properties of our approach. In Section 4, we will move from the population problem to the finite sample problem, and discuss constructing policies empirically. Notation and setup Suppose that we have a representative sample of n units independently drawn from a population P. For each individual unit i = 1, . . . , n, we observe a set of covariates X i ∈ X ⊆ R p and a binary outcome Y i ∈ {0, 1}. 
We consider a set of K possible actions, denoted by A with |A| = K, that can be taken for each unit. For each unit, action A i may affect its own outcome Y i but has no impact on its pre-treatment covariates X i . We assume no interference between units and consistency of treatment (Rubin, 1980). Then, we can write the potential outcome under each action A i = a as Y i (a) where a ∈ A (Neyman, 1923;Holland, 1986). We consider the setting where we know the baseline deterministic policyπ : X → A that generated the observed action A i =π(X i ) and the observed outcome Y i = Y i (A i ). Thus, we may write Y i = Y i (π(X i )) . This baseline policy partitions the covariate space, and we denote the set of covariates where the baseline action is a asX a ≡ {x ∈ X |π(x) = a}. Throughout this paper, when convenient, we will also refer to the baseline policy asπ(x | a) ≡ 1{π(x) = a}, the indicator of whether the baseline policy is equal to a. Because our setting implies that ({Y i (a)} a∈A , X i ) is independently and identically distributed, we will sometimes drop the i subscript to reduce notational clutter. Optimal policy learning Our primary goal is to find a new policy π : X → A, that has a high expected utility. 2 We will again use the notation π(a | X) ≡ 1{π(X) = a} for the policy being equal to action a given the covariates X. Letting u(y, a) denote the utility for outcome y under action a, the utility for action a with potential outcome Y (a) is given by, Note that this utility only takes into account the policy action and the outcome. In Section 6.2, we will show how to include the costs of human decisions into the utility function as well. The two key components of the utility are (i) the utility change between the two outcomes for action a, u(a) ≡ u(1, a) − u(0, a), which we assume is non-negative without loss of generality, and (ii) the utility for an outcome of zero with an action a, c(a) ≡ u(0, a); we will refer to this latter term as the "cost" because it denotes the utility under action a when the outcome event does not happen. We define the utility in terms of both the outcome y and the action a to capture the fact that some actions are costly; for example in Section 7 we will place a cost on triggering the NVCA flag and recommending cash bail. If the treatment has no cost, i.e., we simply set c(a) = 0. The value of policy π, or "welfare," is the expected utility under policy π across the population, V (π) = E a∈A π(a | X) {u(a)Y (a) + c(a)} .(1) Using the law of iterated expectation, we can write the value in the following form, V (π, m) = E a∈A E [π(a | X){u(a)Y (a) + c(a)} | X] = E a∈A π(a | X){u(a)m(a, X) + c(a)} ,(2) where m(a, x) ≡ E[Y (a) | X] represents the conditional expected potential outcome function. We include the dependence on the conditional expected potential outcome function m(a, x) to explicitly denote the value under different potential models in our development below. For two policies, we will define the regret of π 1 relative to π 2 as R(π 1 , π 2 , m) = V (π 2 , m) − V (π 1 , m). Ideally, we would like to find a policy π that has the highest value across a policy class Π. We can write a population optimal policy as the one that maximizes the value, π * ∈ argmax π∈Π V (π), or, equivalently, minimizes the regret relative toπ, π * ∈ argmin π∈Π {V (π) − V (π)}. Note that this optimal policy may not be unique. The policy class Π is an important object both in the theoretical analysis and in applications. 
We discuss the theoretical role of the policy class further in Sections 3.4 and 4.2, the important special case of policy classes with finite VC dimension in Section 5.2, and the substantive choices when applied to pre-trial risk assessments in Section 7. In order to find the optimal policy, we need to be able to point identify the value V (π) for all candidate policies π ∈ Π. Equation (2) shows us that in order to point identify the value we will need to point identify the conditional expectation m(a, x) for all actions a ∈ A and covariate values x ∈ X . If the baseline policyπ were stochastic, we could identify the conditional expectation via IPW (see, e.g. Zhao et al., 2012;Kitagawa and Tetenov, 2018). Alternatively, we could apply direct model-based imputation by using the conditional expectation of the observed outcomes E[Y | X = x, A = a]. However, in our setting where the baseline policyπ is a deterministic function of covariates, we cannot point identify the conditional expectation m(a, x). Therefore, we cannot point identify the value V (π) for all policies π ∈ Π. Robust optimization in the population In order to understand how lack of point identification affects our ability to find a new policy, we will separate the value of a policy into two components: one that is point identifiable and one that is not. We will then attempt to partially identify the latter term, and optimize for the worst-case value. To do this, we will use the fact that we can identify the conditional expectation of the potential outcome under the baseline policy as the conditional expectation of the observed outcome,m (x) ≡ m(π(x), x) = E[Y (π(X)) | X = x] = E[Y | X = x]. We can then write the value V (π) in terms of the identifiable partial model m(π(x), x) by using the observed outcome Y when our policy π agrees with the baseline policyπ, and the unidentifiable full model m(a, x) when π disagrees withπ, V (π, m) = E a∈A π(a | X) {u(a) [π(a | X)Y + {1 −π(a | X)} m(a, X)] + c(a)} .(3) Without further assumptions, we cannot point identify the value of the conditional expectation when a is different from the baseline policy and so we cannot identify V (π, m) for an arbitrary policy π. However, we can identify the value of the baseline policyπ as simply the utility using the observed policy values and outcomes, V (π) = E a∈Aπ (a | X){u(a)Y + c(a)} . Now, if we place restrictions on m(a, x), we can partially identify a range of potential values for a given policy π. Specifically, we encode the conditional expectation as a function m : A × X → [0, 1], and restrict it to be in a particular model class F. We then combine this with the fact that we have identified some function values, i.e., m(π(x), x) =m(x) = E[Y | X = x], to form a restricted model class: M = {f ∈ F | f (π(x), x) =m(x) ∀x ∈ X }.(4) We discuss particular choices of model class F and how to construct the associated restricted model class M in Section 5.1 below. This restricted model class combines the structural information from the underlying class F with the observable implications from the data to limit the possible values of the conditional expectation function m(a, x). With this, we take a maximin approach, finding a policy that maximizes the worst-case value across the set of potential models M for m(a, x). An equivalent approach is to minimize the worstcase regret relative to the baseline policyπ, because the value forπ is point identified. 
Therefore, the robust policy is a solution to, π inf ∈ argmax π∈Π min m∈M V (π, m) ⇐⇒ π inf ∈ argmin π∈Π max m∈M {V (π) − V (π, m)} .(5) We characterize the resulting optimal policy as "safe" because it first finds the worst-case value, V inf (π) ≡ min m∈M V (π, m), by minimizing over the set of allowable models M, and then finds the best policy in this worst-case setting. Since we are only optimizing over the unknown components, the worst-case value and the true value coincide for the baseline policy, i.e., V inf (π) = V (π). Therefore, so long as the baseline policyπ is in the policy class Π, the safe optimal policy π inf will be at least as good as the baseline. Furthermore, the baseline policy acts as a fallback option. If deviating from the baseline policy can lead to a worse outcome, the safe optimal policy will stick to the baseline. In this way, this robust solution only changes the baseline where there is sufficient evidence for an improved value. Finally, note that this is a conservative decision criterion. Other less conservative approaches include minimizing the regret relative to the best possible policy, or maximizing the maximum possible value; see Manski (2005) for a general discussion and Cui (2021) for other possible choices with partial identification. Two worked examples To give intuition on the proposed procedure, we will consider two special cases: (i) a single discrete covariate, and (ii) two binary covariates. Single discrete covariate Consider the case where we have a single discrete covariate with J levels x ∈ {0, . . . , J − 1}, which we will assume is drawn uniformly with probability 1/J for notational simplicity. Suppose we have a binary action, i.e., A = {0, 1}, and a binary outcome. Then, we can use the following vector representation: we write the conditional expectation function of the potential outcome given an action a ∈ A as m a ≡ (m a0 , . . . , m a,J−1 ) ∈ [0, 1] J , a policy as π ≡ (π 0 , . . . , π J−1 ) ∈ {0, 1} J , and the baseline policy asπ ≡ (π 0 , . . . ,π J−1 ) ∈ {0, 1} J . Finally, we can also denote the conditional expectation of the observed outcome as a vectorm ≡ (m 0 , . . . ,m J−1 ) ∈ [0, 1] J . Our first step is to constrain the model class, in this case restricting the vectors m 0 and m 1 to lie in a subset F ⊂ [0, 1] J × [0, 1] J . For illustration, here we focus on the restriction that nearby components m aj and m ak are close in value, i.e., that they satisfy a Lipschitz property, F = {(m 0 , m 1 ) ∈ [0, 1] 2J | |m aj − m ak | ≤ λ a |k − j|}, where λ a is a constant. We can now combine this Lipschitz property with the constraint based on the observable outcomes: m aj =m j for all j withπ j = a. This yields that the restricted model class bounds each of the components of the model vectors, M = {(m 0 , m 1 ) ∈ [0, 1] 2J | L aj ≤ m aj ≤ B aj }, where the lower and upper bounds are given by, L aj = max k∈Ka (m k − λ a |k − j|) and B aj = min k∈Ka (m k + λ a |k − j|)(6) with K a = {k |π k = a} being the set of indices where the baseline action is equal to a. For simplicity, assume that the utility change is constant, u(a) = 1, and the cost is zero, c(a) = 0, for all actions a.
Figure 2: Solid points indicate the identifiable values of m aj , colored by the action (action 1 is purple; action 0 is green), while the hollow points represent the unidentifiable values. Each line shows a partial identification region, the range between the lower and upper bounds in Equation (6) above with the true Lipschitz constants λ 0 and λ 1 .
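The bounds in Equation (6) are simple to compute once the observed conditional meansm j , the baseline policy, and the Lipschitz constants are specified; the following sketch (with assumed values for all three, not taken from the paper) illustrates the calculation.

```python
import numpy as np

J = 20
m_tilde = np.linspace(0.4, 0.8, J)          # observed E[Y | X = j] (assumed values)
pi_base = (np.arange(J) >= 10).astype(int)  # baseline policy: action 1 iff j >= 10
lam = {0: 0.02, 1: 0.05}                    # assumed Lipschitz constants per action

def lipschitz_bounds(a):
    """Lower/upper bounds L_aj, B_aj from Equation (6), intersected with [0, 1]."""
    K_a = np.where(pi_base == a)[0]                   # indices with baseline action a
    j = np.arange(J)[:, None]                         # shape (J, 1)
    dist = np.abs(K_a[None, :] - j)                   # |k - j| for each k in K_a
    L = np.clip((m_tilde[K_a] - lam[a] * dist).max(axis=1), 0, 1)
    B = np.clip((m_tilde[K_a] + lam[a] * dist).min(axis=1), 0, 1)
    return L, B

L0, B0 = lipschitz_bounds(0)
L1, B1 = lipschitz_bounds(1)
print(np.round(L1, 2))   # L_1j: informative near j = 10, close to 0 far from it
```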
Then, the robust maximin problem given in Equation (5) becomes: π inf = argmax π∈Π 1 a=0    1 J J−1 j=0 1{π j = a} (1{π j = a}m j + 1{π j = a}L aj )    .(7) Thus, the worst-case value uses the lower bound L aj in place of the unknown conditional expectation m aj . To further illustrate this case, consider the following numerical example where the action has a constant effect on the logit scale: m aj = logit −1 (s j +0.15×1{a = 1}) where s j is a covariate-specific intercept for each j. Suppose that the baseline action is given byπ j = 1{j ≥ 10}. Under this setting, the solid points in Figure 2 indicate the identifiable values of m aj , colored by the action (action 1 is green; action 0 is purple), while the hollow points represent the unidentifiable values. Each line shows a partial identification region, the range between the lower and upper bounds in Equation (6) Since the treatment effect is always positive, the oracle policy that optimizes the true value, π * = argmax π∈Π V (π, m), would assign action 1 everywhere. To construct the robust policy π inf via Equation (7), we assign action 1 wherever we can guarantee that action 1 has a higher expected outcome than action 0, or vice versa for action 0, no matter the true underlying model. These are the values where the solid points are entirely above (between red and black dashed lines) or below (right of the black dashed line) the partial identification lines. Wherever there is no such guarantee-where the lines contain the solid points (left of the red dashed line)-the maximin policy falls back to the baseline. The result is the robust policy that assigns action 1 for j ≥ 4 and action 0 otherwise. This safe policy improves welfare by 2.5% relative to the baseline, compared to the optimal rule which improves welfare by 4.2%. Two binary covariates Next, we consider a case with two binary covariates x = (x 1 , x 2 ) ∈ {0, 1} 2 -again drawn uniformly for simplicity -where the utility changes are constant, i.e., u(0) = u(1) = 1, the cost is zero, i.e., c(0) = c(1) = 0, and a baseline policy assigns action 1 when both covariate values are 1, π(x) = 1{x 1 x 2 = 1}. Then, we can represent any conditional expectation function as a linear model with an interaction term, m(a, x) = β a0 + β a1 x 1 + β a2 x 2 + β a12 x 1 x 2 . Denote the conditional expectation of the observed outcome asm x 1 x 2 ≡ m(π(x), x). With this setup, the coefficients must satisfy the following four linear constraints: m 00 = β 00 ,m 10 = β 00 + β 01 , m 01 = β 00 + β 02 ,m 11 = β 10 + β 11 + β 12 + β 112 . Without any further assumptions, we can only identify β 00 , β 01 , and β 02 . Therefore, we cannot learn any policy other than the baseline policy without restrictions on the unidentifiable coefficients. It turns out, however, that if we are willing to assume that the conditional expectation is additive, i.e., β a12 = 0 for both actions, we can make progress. Under this additional assumption, we can represent these models as 3 dimensional vectors, F = {m(a, x) = β a0 + β a1 x 1 + β a2 x 2 | (β a0 , β a1 , β a2 ) ∈ R 3 }. Then, the restricted set M consists of vectors (β 00 , β 01 , β 02 , β 10 , β 11 , β 12 ) ∈ R 6 that satisfy the four linear constraints in Equation (8). Using linear algebra tools, we can write the restricted set in terms of the observable model valuesm x 1 x 2 and the null space of the four linear constraints. Specifically, the coefficients under action 0 are uniquely identified: β 00 =m 00 , β 01 =m 10 −m 00 , and β 02 =m 01 −m 00 . 
In contrast, the coefficients for action 1 are only restricted to sum tom 11 . By computing the null space of this single constraint, the restricted model set can be written as, M = { β 00 =m 00 , β 01 =m 10 −m 00 , β 02 =m 01 −m 00 , β 10 = −(11/4)(b 1 + b 2 ), β 11 = (15/4)b 1 − b 2 , β 12 = (15/4)b 2 − b 1 +m 11 , β 112 = 0 | (b 1 , b 2 ) ∈ R 2 }. Optimizing over the two unknown parameters, (b 1 , b 2 ), we find the following worst-case value: V inf (π) = Σ x 1 ∈{0,1} Σ x 2 ∈{0,1} [ 1{π(x 1 , x 2 ) = 0} {m 00 + (m 10 −m 00 )x 1 + (m 01 −m 00 )x 2 } + 1{π(x 1 , x 2 ) = 1} {m 11 x 1 x 2 − I(x 1 x 2 = 0)} ], where I(x ∈ S) is equal to ∞ if x ∈ S and is equal to 0 otherwise. When finding the safe policy, this rules out π(x) = 1 whenever x 1 = 0 or x 2 = 0; that is, it constrains the safe policy to set π(x) = 0 for those values. Thus, the maximin robust optimization problem (5) is given by, max π∈Π Σ x 1 ∈{0,1} Σ x 2 ∈{0,1} [ 1{π(x 1 , x 2 ) = 0} {m 00 + (m 10 −m 00 )x 1 + (m 01 −m 00 )x 2 } + 1{π(x 1 , x 2 ) = 1}m 11 x 1 x 2 ] subject to π(0, 0) = π(1, 0) = π(0, 1) = 0. Note that the only free parameter in the robust optimization problem is the policy action at x 1 = x 2 = 1; the three other policy values are constrained to be zero. Therefore, with a fully flexible policy class, the robust policy is constrained to agree with the baseline policyπ for all x 1 x 2 = 0 but can still disagree for x 1 = x 2 = 1 by extrapolating with the model. When the candidate policy gives an action of zero, π(1, 1) = 0, the worst-case value V inf (π) will use the point-identified control model, extrapolating to the unobserved case as m(0, (1, 1)) =m 10 +m 01 −m 00 . This allows us to learn a safe policy π inf that can disagree withπ and will assign action 0 rather than action 1 if m 11 <m 01 +m 10 −m 00 . Regret relative to the baseline and oracle policies We now derive the theoretical properties of the proposed population safe policy π inf . To simplify the statements of the results, we will assume that the utility gain across different actions is constant and, without loss of generality, is positive, u(a) ≡ u(1, a) − u(0, a) = u > 0 for all actions a ∈ A. First, the proposed policy is shown to be "safe" in the sense that it never performs worse than the baseline policyπ. This conservative principle is the key benefit of the robust optimization approach. The following proposition shows that as long as the baseline policy is in our policy class Π, and the underlying model lies in the restricted model class M, the value of the population safe policy is never less than that of the baseline policy. Proposition 1 (Population safety). Let π inf be a solution to Equation (5). If m ∈ M andπ ∈ Π, then R(π inf ,π, m) ≤ 0. However, this guarantee of safety comes at a cost. In particular, the population safe policy may perform much worse than the infeasible, oracle optimal population policy, π * ∈ argmax π∈Π V (π). Although we never know the oracle policy, we can characterize the optimality gap, V (π * ) − V (π inf ), which is the regret or the difference in values between the proposed robust policy and the oracle. To do this, we consider the "size" of the restricted model class M. Specifically, we define the width of some function class F in the direction of function g as: W F (g) = sup f ∈F E a∈A f (a, X)g(a, X) − inf f ∈F E a∈A f (a, X)g(a, X) .(9) This represents the difference between the maximum and minimum cross-moment of a function g and all possible functions f ∈ F.
We then define the overall size of the model class, W F , as the maximal width over all possible policies: W F = sup g∈G W F (g), where G = {g : sup x∈X Σ a∈A g(a, x) ≤ 1, g(a, x) ≥ 0} is the space of all possible policies. The size of the restricted model class W M denotes the amount of uncertainty due to partial identification. If we can point identify the conditional expectation function, then the size will be zero; larger partial identification sets will have a larger size. The following theorem shows that the optimality gap, scaled by the utility gain u, is bounded by the size of the model class. In other words, the cost of robustness is directly controlled by the amount of uncertainty in the restricted model class M. Theorem 1 (Population optimality gap). Let π inf be a solution to Equation (5). If m(a, x) ∈ M, the regret of π inf relative to the optimal policy π * ∈ argmax π∈Π V (π) satisfies R(π inf , π * , m)/u ≤ sup π∈Π W M (π(1 −π)) ≤ W M . In the limiting case where we can fully identify the conditional expectation m(a, x) ≡ E[Y (a) | X = x], M contains only one element. Then, the size W M will be zero and so the regret will be zero. This means that the solution to the robust optimization problem in Equation (5) coincides with the optimal policy. Conversely, if we can point identify the conditional expectation only for few action-covariate pairs, then the size of the restricted model class M will be large, there will be a greater potential for sub-optimality due to lack of identification, and the regret of the safe policy π inf relative to the infeasible optimal policy π * could be large. Finally, note that the size of the restricted model class W M gives the worst-case bound, but the potentially tighter bound depends on the policy class as well. The Empirical Safe Policy In practice, we do not have access to an infinite amount of data, and so we cannot compute the population safe policy. Here, we show how to learn an empirical safe policy from observed data of finite sample size. From the population problem to the empirical problem Suppose we have n independently and identically distributed data points {X i ,π(X i ), Y i (π(X i ))} n i=1 . From this sample we wish to find a robust policy empirically. To do so, we begin with a sample analog to the value function in Equation (3) above, V (π, m) = 1 n n i=1 a∈A π(a | X i ) {u(a) [π(a | X i )Y i + {1 −π(a | X i )} m(a, X i )] + c(a)} .(10) With this, we could find the worst-case sample value across all models in the restricted model class M from Equation (4). However, we do not have access to the true conditional expectationm(x) = E[Y | X = x] and so cannot compute the true restricted model class. One possible way to address this is to obtain an estimator of the conditional expectation function,m(x), and use the estimate in place of the true values. However, this does not take into account the estimation uncertainty, and could lead to a policy that improperly deviates from the baseline due to noise. This approach will have no guarantee that the new policy is at least as good as the baseline without access to many samples: it would rely on convergence of the estimate ofm(x), which may be slow.
Instead, we construct a larger, empirical model class M n (α), based on the observed data, that contains the true restricted model class with probability at least 1 − α, P M ⊆ M n (α) ≥ 1 − α.(11) Then we construct our empirical policies by finding the worst-case in-sample value then maximizing this objective across policies π π ∈ argmax π∈ΠV inf (π) ≡ argmax π∈Π min m∈ Mn(α)V (π, m).(12) We discuss concrete approaches to constructing the empirical model class and solving this optimization problem in Section 5.1. In general, the empirical model class will be larger than the true model class and so a policy derived from it will be more conservative. Finite sample statistical properties What are the statistical properties of our empirical safe policyπ in finite samples? We will first establish that the proposed policy has an approximate safety guarantee: with probability approximately 1 − α we can guarantee that it is at least as good as the baseline, up to sampling error and the complexity of the policy class. We then characterize the empirical optimality gap and show that it can be bounded using the complexity of the policy class as well as the size of the empirical restricted model class. For simplicity, we will consider the special case of a binary action set A = {0, 1}. We use the population Rademacher complexity to measure the complexity of the policy class: R n (Π) ≡ E X,ε sup π∈Π 1 n n i=1 ε i π(X i ) , where ε i 's are i.i.d. Rademacher random variables, i.e., Pr(ε i = 1) = Pr(ε i = −1) = 1/2, and the expectation is taken over both the Rademacher variables ε i and the covariates X i . The Rademacher complexity is the average maximum correlation between the policy values and random noise, and so measures the ability of the policy class Π to overfit. First, we establish a statistical safety guarantee analogous to Proposition 1. Theorem 2 (Statistical safety). Letπ be a solution to Equation (12). Given the baseline policỹ π ∈ Π and the true conditional expectation m(a, x) ∈ M, for any 0 < δ ≤ e −1 , the regret ofπ relative to the baselineπ is, R(π,π, m) ≤ 8CR n (Π) + 14C 1 n log 1 δ , with probability at least 1 − α − δ, where C = max y∈{0,1},a∈{0,1} |u(y, a)|. Theorem 2 shows that the regret for the empirical safe policy versus the baseline policy is controlled by the Rademacher complexity of the policy class Π, and an error term due to sampling variability that decreases at a rate of n −1/2 . The complexity of the policy class Π determines the quality of the safety guarantee for any level α. If the policy class is simple, then the bound will quickly go towards zero for any level α; if it is complex, then we will require larger samples to ensure that the safety guarantee is meaningful, regardless of the level α. Importantly, by taking a conservative approach using the larger model class M n (α), the estimation error for the conditional expectationm(x) −m(x) does not directly enter into the bound. However, if we cannot estimatem(x) well, the empirical restricted model class M n (α) will be large, which will affect how well the empirical safe policy compares to the oracle policy. To quantify this, we will again rely on a notion of the size of the empirical restricted model class M n (α). In this setting, however, we will use an empirical width, W F (g) = sup f ∈F 1 n n i=1 a∈A f (a, X i )g(a, X i ) − inf f ∈F 1 n n i=1 a∈A f (a, X i )g(a, X i ).(13) Similarly to above, we define the empirical size of F, W F , as the maximal empirical width over potential models, W F = sup g∈G W F (g). 
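Because the population Rademacher complexity is defined as an expectation over random signs, it can be approximated by Monte Carlo. The sketch below (our illustration, not a procedure from the paper) does this for a hypothetical menu of one-dimensional threshold policies, conditioning on a single draw of the covariates for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)

def rademacher_complexity(policy_values, n_draws=200):
    """Monte Carlo estimate of E_eps[ sup_pi (1/n) sum_i eps_i pi(X_i) ] over a finite
    menu of policies; policy_values is an (n_policies, n) array of pi(X_i) in {0, 1}."""
    n = policy_values.shape[1]
    sups = []
    for _ in range(n_draws):
        eps = rng.choice([-1.0, 1.0], size=n)           # Rademacher noise
        sups.append(np.max(policy_values @ eps) / n)    # sup over the policy menu
    return float(np.mean(sups))

n = 200
X = rng.uniform(0, 1, n)
thresholds = np.linspace(0, 1, 101)
policies = (X[None, :] >= thresholds[:, None]).astype(float)   # threshold policies
print(f"estimated Rademacher complexity: {rademacher_complexity(policies):.3f}")
```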
Theorem 3 (Empirical optimality gap). Letπ be a solution to Equation (12) and assume that the utility gains are equal to each other, u(1) = u(0) = u > 0. If the true conditional expectation m ∈ M, then for any 0 < δ ≤ e −1 , the regret ofπ relative to the optimal policy π * is R(π, π * , m) ≤ 2C sup π∈Π W Mn(α) (π(1 −π)) + 8CR n (Π) + 14C √((1/n) log(1/δ)) ≤ 2C W Mn(α) + 8CR n (Π) + 14C √((1/n) log(1/δ)), with probability at least 1 − α − δ, where C = max y∈{0,1},a∈{0,1} |u(y, a)|. Comparing to Theorem 1, we see that the size -now the empirical version -plays an important role in bounding the gap between the empirical safe policy and the optimal policy. In addition, the Rademacher complexity again appears: policy classes that are more liable to overfit can have a larger optimality gap. For many standard policy classes, we can expect the Rademacher complexity to decrease to zero as the sample size increases, while the empirical size of the restricted model class may not. Furthermore, there is a tradeoff between finding a safe policy with a higher probability -setting the level 1 − α to be high -and finding a policy that is closer to optimal. By setting 1 − α to be high, the width of M n (α) will increase, and the potential optimality gap will be large. This tradeoff is similar to the tradeoff between having a low type I error rate (α low) and high power ( W Mn(α) low) in hypothesis testing. Similarly, if we cannot estimate the conditional expectation function well, then the size W Mn(α) can be large even when the confidence level 1 − α is low. Model and Policy Classes The two important components when constructing safe policies are the assumptions we place on the outcome model -the model class F -and the class of candidate policies that we consider, Π. We will now consider several representative cases of model classes, show how to construct the restricted model classes, and apply the theoretical results above. Then, we will further discuss the role of the policy class, considering the special cases of the results for policy classes with finite VC dimension. Point-wise bounded restricted model classes We first consider the case where the restricted model class can be characterized by point-wise lower and upper bounds, B (a, x) and B u (a, x), M = {f : A × X → R | B (a, x) ≤ f (a, x) ≤ B u (a, x)}.(14) We will also create an empirical restricted model class M n (α) that satisfies the probability guarantee in Equation (11), with analogous empirical bounds B α (a, x) and B αu (a, x). For such classes, the population and empirical sizes are bounded by, W M ≤ E max a∈A {B u (a, X) − B l (a, X)} and W Mn(α) ≤ 1 n n i=1 max a∈A ( B αu (a, X i ) − B α (a, X i )).(15) In Appendix A.1 we use these bounds to specialize Theorems 1 and 3 to this case. The point-wise bound also allows us to solve for the worst-case population and empirical values V inf (π) andV inf (π) by finding the minimal value for each action-covariate pair (see Pu and Zhang, 2021). Finding the empirical safe policy by solving Equation (12) is equivalent to solving an empirical welfare maximization problem using a quasi-outcome that is equal to the observed outcome when the action agrees with the baseline policy, and is equal to either the upper or lower bound when it disagrees, Υ i (a) =π(a | X i )Y i + {1 −π(a | X i )} [1{u(a) ≥ 0} B α (a, X i ) + 1{u(a) ≤ 0} B αu (a, X i )]. With this, Equation (12) specializes toπ ∈ argmax π∈Π 1 n n i=1 a∈A π(a | X i ) {u(a) Υ i (a) + c(a)} .(16) In effect, for an action a where the baseline actionπ(x) is equal to a, the minimal value uses the outcomes directly. In the counterfactual case where the baseline action is different from a, the value will use either the upper or lower bound of the outcome model, depending on the sign of the utility gain.
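To illustrate the quasi-outcome construction and Equation (16), the sketch below builds Υ i (a) from assumed point-wise bounds and maximizes the resulting objective over a small threshold policy class by enumeration; the simulated data, the bounds, and the policy class are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(0, 1, n)                      # a single covariate
pi_base = (X >= 0.6).astype(int)              # deterministic baseline policy
Y = rng.binomial(1, 0.4 + 0.3 * pi_base)      # observed outcomes under the baseline
u = np.array([1.0, 1.0])                      # utility gains u(a) >= 0
c = np.array([0.0, -0.2])                     # costs c(a)

# Assumed empirical point-wise bounds on m(a, x); here simply [0, 1] for illustration.
B_lo = lambda a, x: np.zeros_like(x)
B_up = lambda a, x: np.ones_like(x)

def quasi_outcome(a):
    """Upsilon_i(a): observed Y where the baseline agrees with a, and the lower bound
    otherwise (the lower bound applies because u(a) >= 0 in this example)."""
    agree = (pi_base == a).astype(float)
    return agree * Y + (1 - agree) * B_lo(a, X)

def empirical_value(policy_actions):
    """Sample analog of Equation (16) for a vector of actions, one per unit."""
    Ups = np.where(policy_actions == 1, quasi_outcome(1), quasi_outcome(0))
    return np.mean(u[policy_actions] * Ups + c[policy_actions])

# Enumerate a small threshold policy class {1{x >= t}}.
thresholds = np.linspace(0, 1, 21)
values = [empirical_value((X >= t).astype(int)) for t in thresholds]
best_t = thresholds[int(np.argmax(values))]
print(f"empirical safe threshold: {best_t:.2f}, worst-case value: {max(values):.3f}")
```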
Using bounds in place of outcomes in this way is similar to the approach of Pu and Zhang (2021) in instrumental variable settings. Since the optimization problem (16) is not convex, it is not straightforward to solve exactly. As many have noted (e.g. Zhao et al., 2012;Zhang et al., 2012;Kitagawa and Tetenov, 2018), this optimization problem can be written as a weighted classification problem and approximately solved via a convex relaxation with surrogate losses. An alternative approach is to solve the problem in Equation (16) directly. In our empirical studies, we consider thresholded linear policy classes that mirror the NVCA and DMF rules we discuss in Section 2; these turn Equation (16) ., F = {f | 0 ≤ f (a, x) ≤ 1 ∀a ∈ A, x ∈ X }. Then the restricted model class M = {f ∈ F | f (a, x) =m(x) for a withπ(x) = a} provides no additional information when the policy π disagrees with the baseline policyπ and the upper and lower bounds in Equation (14) are B u (a, x) =π(a | x)m(x) + {1 −π(a | x)} and B (a, x) =π(a | x)m(x), respectively. In the absence of any additional information, the worst case conditional expectation is 0 or 1 (depending on the sign of the utility gain) whenever it is not point identified. The size of this model class is then W M = 1, the maximum possible value. Note, however, that it is still possible to learn a new policy if the utility gain for an action is never worth the cost, i.e., |u(a)| < |c(a)|. To construct the larger, empirical model class M n (α), we begin with a simultaneous 1 − α confidence interval for the conditional expectation functionm(x), with lower and upper bounds C α (x) = [ C α (x), C αu (x)] such that P m(x) ∈ C α (x) ∀ x ≥ 1 − α.(17)(x), i.e. B αu (a, x) =π(a | x) C αu (x) + {1 −π(a | x)} and B (a, x) =π(a | x) C α (x). Example 2 (Lipschitz Functions). Suppose that the covariate space X has a norm · , and that m(a, ·) is a λ a -Lipschitz function, F = {f : A × X → R | |f (a, x) − f (a, x )| ≤ λ a x − x }. Taking the greatest lower bound and least upper bound implied by this model class leads to lower and upper bounds, B (a, x) = sup x ∈Xa {m(x ) − λ a x − x }, and B u (a, x) = inf x ∈Xa {m(x ) + λ a x − x }, where recall thatX a = {x ∈ X |π(x) = a} is the set of covariates where the baseline policy gives action a. The further we extrapolate from the area where the baseline actionπ(x) = a, the larger the value of x−x will be and so there will be more ignorance about the values of the function. So the size of M will depend on the expected distance to the boundary between baseline actions and the value of the Lipschitz constant. If most individuals are close the boundary, or the Lipschitz constant is small, M will be small and the safe policy will be close to optimal. Conversely, a large number of individuals far away from the boundary or a large Lipschitz constant will increase the potential for suboptimality. To construct the empirical version, we again use a simultaneous confidence band C α (x) satisfying Equation (17). Then the lower and upper bounds use the lower and upper con- fidence limits in place of the function values, B α (a, X) = sup x ∈Xa C α (x ) − λ a X − x and B αu (a, X) = inf x ∈Xa C αu (x ) − λ a X − x . Example 3 (Generalized linear models). Consider a model class that is a generalized linear model in a set of basis functions φ : ) for all x and a such thatπ(x) = a. Let β * ∈ R p be the minimum norm solution and let D ∈ R d×d ⊥ be an orthonormal basis for the null space N = {b ∈ R d | b · φ(a, x) = 0 ∀π(x) = a}. 
Then we can re-write the restricted model class as A × X → R d , with monotonic link function h : [0, 1] → R, F = {f (a, x) = h −1 (b · φ(a, x))}. The restricted model class is the set of coefficients b that satisfy h(m(x)) = b · φ(a, xM = {f (a, x) = h −1 ((β * + Db N ) · φ(a, x)) | b N ∈ R d ⊥ }. The free parameters in this model class are represented as the vector b N ∈ R d ⊥ . Finding the worst-case value will involve a non-linear optimization over b N , which may result in optimization failure. Rather than taking this approach, we will consider a larger class M ≡ {f | B (a, x) ≤ f (a, x) ≤ B u (a, x)} that contains the restricted model class M. Note that using this larger model class will be conservative. Since m(a, x) is between 0 or 1, we can use this bound when φ(a, x) is in the null space N to get upper and lower bounds B (a, x) = h −1 (β * · φ(a, x))1{D φ(a, x) = 0} B u (a, x) = h −1 (β * · φ(a, x))1{D φ(a, x) = 0} + 1{D φ(a, x) = 0}. The worst-case value uses β * a to extrapolate wherever we can point identify m(a, x). It resorts to one of the bounds for units assigned to action a whenπ(x) = a and φ(a, x i ) is not orthogonal to the null space. The size of the model class is the percentage of units that are in the null space, W M = 1 − Pr D φ(a, X) = 0 ∀ a ∈ A . The fewer units in the null space, the smaller the size and the closer the safe policy is to optimal. To construct the empirical model class we again begin with a simultaneous confidence band, this time for the minimum norm prediction β * · φ(a, x) ∈ [ C α (a, x), C αu (a, x)] via the Working-Hotelling-Scheffé procedure (Wynn and Bloomfield, 1971;Ruczinski, 2002), β * · φ(a, x) ∈β * · φ(a, x) ± rF α,(r,n−r)σ 2 φ(x, a) (Φ Φ) † φ(a, x), whereβ * is the least squares estimate of the minimum norm solution,σ 2 is the estimate of the variance from the MSE, Φ = [φ(π(x i ), x i )] n i=1 ∈ R n×d is the design matrix, r is the rank of Φ, F α,(r,n−r) is the 1 − α quantile of an F distribution with r and n − r degrees of freedom, and A † denotes the pseudo-inverse of a matrix A. This gives lower and upper bounds, B α (a, x) = h −1 ( C α (a, x))1{D φ(a, x) = 0}, B αu (a, x) = h −1 ( C αu (a, x))1{D φ(a, x) = 0} + 1{D φ(a, x) = 0}. Policy classes with finite VC dimension The choice of model class F corresponds to the substantive assumptions we place on the outcomes in order to extrapolate and find new policies. The choice of policy class Π is equally important: it determines the type of policies we consider. An extremely flexible policy class with no restrictions will result in the highest possible welfare, but such a policy is undesirable for two reasons. First, they are all but inscrutable by both those designing the algorithms and those subject to the algorithm's actions (see Murdoch et al., 2019, for discussion on interpretability issues). Second, policies that are too flexible will have a high complexity, and so the bounds on the regret of the empirical safe policy-versus either the baseline policy or the infeasible optimal policy-will be too large to give meaningful gaurantess. One way to characterize the complexity of the policy class Π is via its VC-dimension: the largest integer m for which there exists some points x 1 , . . . , x m ∈ X that are shattered by Π, i.e. where the policy values π(x 1 ), . . . , π(x m ) can take on all 2 m possible combinations (for more on VC dimension and uniform laws, see Wainwright, 2019, §4). 
Examples of policy classes with finite VC dimension include linear policies, Π lin = {π(x) = 1{θ · x ≥ θ 0 } | (θ 0 , θ) ∈ R d+1 } with a VC dimension of d + 1, and depth L decision trees with VC dimension on the order of 2 L log d (Athey and Wager, 2021). The VC dimension gives an upper bound on the Rademacher complexity: for a function class G with finite VC dimension ν < ∞, the Rademacher complexity is bounded by, R n (G) ≤ c ν/n, for some universal constant c (Wainwright, 2019, §5.3). In Appendix A.3, we use this bound to specialize the results in Section 4.2, finding that the higher the VC dimension, the more liable a policy is to overfit to the noisy data, and the more samples we will need to ensure that the regret bound is low. For a policy class with finite VC dimension, which will be typical in applied policy settings, the rate of convergence will still be O p n −1/2 . However, for a policy class with VC dimension growing with the sample size, ν n β , the rate of growth must be less than √ n in order for the regret to converge to a value less than or equal to zero. See Athey and Wager (2021) for further discussion. Extensions Motivated by our application introduced in Section 2, we consider two important extensions of the methodology proposed above. First, we consider the scenario under which the data come from a randomized experiment, where a deterministic policy of interest is compared to a status quo without such a policy. Second, we consider a human-in-the-loop setting, in which an algorithmic policy recommendation is deterministic, but the final decision is made by a human decision-maker in an unknown way. In this case, we must adapt the procedure to account for the fact that the policy may affect the final decisions, but does not determine them, implying that actions only incur costs through the final decisions. For notational simplicity, we will again assume, throughout this section, that the utility gain is constant across all actions and is denoted by u. Experiments evaluating a deterministic policy In many cases, a single deterministic policy is compared to the status quo of no such policy via a randomized trial for program evaluation. In our empirical study, the existing policy was compared to a "null" policy where no algorithmic recommendations were provided. The goal of such a trial is typically to evaluate whether one should adopt the algorithmic policy. We now show that one can use the proposed methodology to safely learn a new, and possibly better, policy even in this setting. In particular, we can weaken the restrictions of the underlying model class by placing assumptions on treatment effects rather than the expected potential outcomes. We focus here on comparing a baseline policyπ to a null policy that assigns no action, which we denote as Ø(x) = Ø, and has potential outcome Y (Ø). In this setup, let Z i ∈ {0, 1} be a treatment assignment indicator where Z i = 0 if no policy is enacted (i.e., null policy), and Z i = 1 if the policy follows the baseline policyπ. Let e(x) = P (Z = 1 | X = x) be the probability of assigning the treatment condition for an individual with covariates x. Since it is an experiment, this probability is known. 
Rather than minimize the regret relative to the baseline policyπ as in Equation (5) π inf ∈ argmin π∈Π max f ∈T −E a∈A π(a, X) (c(a) + u(a) [π(a | X)Γ(Z, X, Y ) + (1 −π(a | X)) f (a, X)]) worst-case regret R inf (π,Ø) .(18) We can similarly construct the empirical analog by creating a larger empirical model class T n (α) as in Section 4. From the perspective of finding a new, empirical safe policy π, the key benefit of experimentally comparing two deterministic policies in this way is that the primary unidentified object is the CATE, τ (a, x) = E[Y (a) − Y (Ø) | X = x] , rather than the conditional expected potential outcome, m(a, x) = E[Y (a) | X = x] . Treatment effect heterogeneity is often considered to be significantly simpler than heterogeneity in outcomes (see, e.g., Künzel et al., 2019;Hahn et al., 2020;Nie and Wager, 2021). Therefore, we may consider a smaller model class for the treatment effects T than the class for the conditional expected outcomes M, leading to better guarantees on the optimality gap between the robust and optimal policies in Theorems 1 and 3. Algorithm-assisted human decision-making In many cases, an algorithmic policy is not the final arbiter of decisions. Instead, there is often a "human-in-the-loop" framework, where an algorithmic policy provides recommendations to a human that makes an ultimate decision. Our pre-trial risk assessment application is an example of such algorithm-assisted human decision-making . D i = D(π(X i )) whereas the observed outcome is Y i = Y i (π(X i )) = Y i (D i (π(X i )),π(X i )). Finally, we write the utility for outcome y under decision d as u(y, d). With this setup, the value for a policy π is: V (π) = E a∈A π(a | x) 1 d=0 [u(1, d)Y (d, a) + u(0, d)(1 − Y (d, a))] 1{D(a) = d} . We make the simplifying assumption that the utility gain is constant across decisions, u(1, d) − u(0, d) = u for d ∈ {0, 1}, index the utility for y = 0 and d = 0 as u(0, 0) = 0, and denote the added cost of taking decision 1 as c = u(0, 1) − u(0, 0). Now we can write the value by marginalizing over the potential decisions, yielding, V (π) = E a∈A π(a | x) (uY (a) + cD(a)) .(19) Comparing Equation (19) to the value in Equation (1) when actions are taken directly, we see that the key difference is the inclusion of the potential decision D(a) in determining the cost of an action. Rather than directly assigning a cost to an action a, there is an indirect cost associated with the eventual decision D(a) that action a induces in the decision maker. Therefore, lack of identification of the expected potential decision under an action given the covariates, d(a, x), also must enter the robust procedure. We can treat lack of identification of the potential decisions in a manner parallel to the outcomes. max π∈Π E a∈A π(a | x)π(a | x)uY + min f ∈M E a∈A π(a | x){1 −π(a | x)}uf (a, X) +E a∈A π(a | x)π(a | x)cD + min g∈D E a∈A π(a | x){1 −π(a | x)}cg(a, X) .(20) By allowing for actions to affect decisions through the decision maker rather than directly, the costs of actions are not fully identified. Therefore, we now find the worst-case expected outcome and decision when determining the worst case value in Equation (20). In essence, we solve the inner optimization twice: once over outcomes for the restricted outcome model class M and once over decisions for the restricted decision model class D. From here, we can follow the development in the previous sections. 
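As a rough sketch of Equation (20), the code below evaluates the worst-case value of a candidate recommendation policy when both the outcome model and the decision model are known only up to point-wise bounds wherever the candidate disagrees with the baseline; every input here is assumed for illustration rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
X = rng.uniform(0, 1, n)
pi_base = (X >= 0.5).astype(int)               # baseline recommendation
D_obs = rng.binomial(1, 0.3 + 0.4 * pi_base)   # observed judge decisions
Y_obs = rng.binomial(1, 0.7 - 0.2 * D_obs)     # observed outcomes
u, c = 1.0, -0.5                               # utility gain and cost of decision 1

# Assumed point-wise bounds for the unidentified models; the worst case uses the
# lower outcome bound (since u > 0) and the upper decision bound (since c < 0).
m_lo = lambda a, x: np.zeros_like(x)           # lower bound on E[Y(a) | X = x]
d_up = lambda a, x: np.ones_like(x)            # upper bound on E[D(a) | X = x]

def worst_case_value(policy_actions):
    agree = (policy_actions == pi_base).astype(float)
    outcome_part = u * (agree * Y_obs + (1 - agree) * m_lo(policy_actions, X))
    decision_part = c * (agree * D_obs + (1 - agree) * d_up(policy_actions, X))
    return np.mean(outcome_part + decision_part)

print(round(worst_case_value((X >= 0.3).astype(int)), 3))
```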
We create empirical restricted model classes for the outcome and decision functions, M n (α/2) and D n (α/2), using a Bonferroni correction so that P (M ∈ M n (α/2), D ∈ D n (α/2)) ≥ 1 − α. Then, we solve the empirical analog to Equation (20). Empirical Analysis of the Pre-trial Risk Assessment We now apply the proposed methodology to the PSA-DMF system and the randomized controlled trial described in Section 2. We will focus on learning robust rules for two aspects of the PSA-DMF system: the way in which the binary new violent criminal activity (NVCA) flag is constructed and the overall DMF matrix recommending a signature bond or cash bail. For both settings, we use the incidence of an NVCA as the outcome. In Appendix C, we also inspect the behavior of this methodology via a Monte Carlo simulation study. Learning a new NVCA flag threshold We begin by learning a new threshold for the NVCA flag. Let x nvca ∈ {0, . . . , 6} be the total number of NVCA points for an arrestee, computed using the point system in Table 1. Recall that the baseline NVCA algorithm is to trigger the flag if the number of points is greater than or equal to 4, i.e. π(x nvca ) = 1{x nvca ≥ 4}. Our goal in this subsection is to find the optimal worst-case policy across the set of threshold policies, Π thresh = {π(x) = 1{x nvca ≥ η} | η ∈ {0, . . . , 7}} ,(21) where our binary outcome of interest is no NVCA occurring. Here, we will keep the baseline weighting on arrestee risk factors and only change the threshold η; in Section 7.2 below we will turn to changing the underlying weighting scheme. Having chosen the policy class Π thresh , we need to restrict the conditional average treatment effect on no NVCA occurring, τ (a, x nvca ). Here, we impose a Lipschitz constraint on the CATE, following Example 2. We also restrict the treatment effects to be bounded between −1 and 1, since the outcome is binary. Note that this is not the tightest possible bound, since the restriction is that 0 ≤ m(Ø, x) + τ (a, x) ≤ 1. To incorporate the uncertainty in estimating m(Ø, x) in finite samples we could use analogous techniques to those in Section 4; we leave this to future work. For this model class, we need to specify the Lipschitz constants for the CATE when the flag is and is not triggered (λ 1 for τ (1, x nvca ) and λ 0 for τ (0, x nvca ), respectively). We adapt a heuristic suggestion from Imbens and Wager (2019) for model classes with a bounded second derivative to the Lipschitz case. We estimate the CATE function by taking the difference in NVCA rates with and without provision of the PSA at each level of x nvca .
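A minimal sketch of this estimation step, using simulated records in place of the actual trial data and pooling the flagged and unflagged regions into a single CATE curve for brevity (the paper computes separate constants for the two regions):

```python
import numpy as np
import pandas as pd

# Simulated stand-in for the trial data: one row per case. All values are made up.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "x_nvca": rng.integers(0, 7, size=2000),        # total NVCA points, 0-6
    "treated": rng.integers(0, 2, size=2000),       # 1 if the PSA was provided
})
df["nvca"] = rng.binomial(1, 0.1 + 0.02 * df["x_nvca"])  # observed NVCA indicator

# CATE estimate at each point level: difference in NVCA rates with vs. without the PSA.
rates = df.groupby(["x_nvca", "treated"])["nvca"].mean().unstack("treated")
cate_hat = rates[1] - rates[0]

# Heuristic Lipschitz constant: a multiple of the largest consecutive difference.
lipschitz = 3 * np.max(np.abs(np.diff(cate_hat.values)))
print(cate_hat.round(3), f"heuristic Lipschitz constant: {lipschitz:.3f}", sep="\n")
```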
Then, we choose the Lipschitz constants to be three times the largest consecutive difference between CATE estimates, yielding λ 0 = 0.047 and λ 1 = 1.3, though other choices are possible. In Appendix C, we inspect the impact of this multiplicative factor on the performance of the empirical safe policy through a simulation study. To construct the empirical restricted model class, we set the level to 1 − α = 0.8, allowing some tolerance for statistical uncertainty, and construct a simultaneous 80% confidence interval for the CATE via the Working-Hotelling-Scheffé procedure, as in Example 3, and use the upper and lower confidence limits. The next important consideration in constructing a new policy is the form of the utility function. Recall that in our parameterization we must define the difference in utilities when there is and is not an NVCA, as well as the cost of each action. Triggering the NVCA flag makes a cash bail recommendation, and hence pre-trial detention, more likely, which leads to an increase in fiscal costs -e.g., housing, security, and transportation -directly incident on the jurisdiction. Furthermore, there are potential socioeconomic costs to the defendant and their community. To represent these costs, we will place zero cost on not triggering the NVCA flag, c(0) = 0, and a cost of 1 on triggering the flag, c(1) = −1. We then assign an equal utility gain from avoiding an NVCA, u(1) = u(0) = u (equivalently, the cost of an NVCA is −u). This yields a utility function of the form u(y, a) = u × y − a. We will consider how increasing the cost of an NVCA relative to the cost of triggering the flag changes the policies we learn.
Figure 3: Learning a new NVCA flag threshold. The x-axis shows the total number of NVCA points, x nvca , and the y-axis shows the CATE τ (a, x nvca ) of providing the PSA. The points and thin lines around them are point estimates and a simultaneous 80% confidence interval for the partial CATE function τ (π(x nvca ), x nvca ) when the NVCA flag is not triggered (π(x nvca ) = 0, in orange) and is triggered (π(x nvca ) = 1, in blue). The thick solid lines represent the partial identification set for the unobservable components of the CATE, τ (1, x nvca ) for x nvca < 4 and τ (0, x nvca ) for x nvca ≥ 4. The purple dashed line represents the baseline policy of triggering the flag when x nvca ≥ 4, and the pink dashed line is the empirical safe policy that only triggers the flag when x nvca ≥ 6.
Figure 3 shows the results. It represents the empirical restricted model class by showing point estimates and simultaneous 80% confidence intervals for the observable component of the CATE function τ (π(x nvca ), x nvca ) and the partial identification set for the unobservable component. Notice that there is substantially more information when extrapolating the CATE for the case that the NVCA flag is not triggered. As we can see in Figure 3, this is because the point estimates do not vary much with the total number of NVCA points, and so we use a small Lipschitz constant. On the other hand, there is essentially no information when extrapolating in the other direction. Because there is a large jump in the point estimates between x nvca = 5 and x nvca = 6, we use a large Lipschitz constant. This means that the empirical restricted model class puts essentially no restrictions on τ (1, x nvca ) for x nvca < 4: the treatment effects can be anywhere between −1 and 1. While the treatment effects are ambiguous, we can still learn a new threshold because we assign a cost to triggering the NVCA flag. To compute the empirical robust policyπ we can solve Equation (16) via an exhaustive search, since the policy class Π thresh only has eight elements. Solving this for different costs of an NVCA, we find that when the cost is between 1 and 11 times the cost of triggering the flag -that is, 1 < u < 11 -the new robust policy is to set the threshold to η = 6, only triggering the flag for arrestees with the observed maximum of 6 total NVCA points. This is a much more lenient policy, reducing the number of arrestees that are flagged as at risk of an NVCA by 95%. Conversely, when the cost of an NVCA is 11 times the cost of triggering the flag or more, the ambiguity about treatment effects leads the empirical safe policyπ to revert to the status quo, keeping the threshold at η = 4.
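The exhaustive search over Π thresh is straightforward to implement; the sketch below evaluates the worst-case value of each of the eight thresholds, with made-up worst-case bounds standing in for the estimated partial identification set.

```python
import numpy as np

levels = np.arange(7)                 # x_nvca in {0, ..., 6}
p_level = np.full(7, 1 / 7)           # assumed distribution of NVCA points
u_gain = 5.0                          # cost of an NVCA relative to triggering the flag
cost = np.array([0.0, -1.0])          # c(0), c(1)

# Assumed worst-case bounds on the probability of no NVCA under each flag value,
# standing in for the empirical restricted model class (illustrative numbers only).
worst_no_nvca = np.array([
    [0.95, 0.95, 0.94, 0.93, 0.90, 0.88, 0.85],   # action 0 (no flag)
    [0.00, 0.00, 0.00, 0.00, 0.91, 0.89, 0.92],   # action 1 (flag), uninformative below 4
])

def worst_case_value(eta):
    a = (levels >= eta).astype(int)                    # threshold policy 1{x >= eta}
    return np.sum(p_level * (u_gain * worst_no_nvca[a, levels] + cost[a]))

values = {eta: worst_case_value(eta) for eta in range(8)}
best = max(values, key=values.get)
print(values, f"robust threshold: {best}", sep="\n")
```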
In Figure D.2 of the Appendix, we show how the maximin threshold changes as we vary the confidence level 1 − α and the multiplicative factor on the estimated Lipschitz constant; overall the relationship between the threshold and the cost of an NVCA is robust to the particular choices. Learning a new NVCA flag point system We now turn to constructing a new, robust NVCA flag rule by changing the weights applied to the risk factors in Table 1. We use the same set of covariates as the original NVCA rule, represented as 7 binary covariates X ∈ {0, 1} 7 : binary indicators for current violent offense, current violent offense and 20 years old or younger, pending charge at time of arrest, prior conviction (felony or misdemeanor), 1 prior violent conviction, 2 prior violent convictions, and 3 or more prior violent convictions. The status quo system uses a vector of weightsθ on the risk factors x and triggers the NVCA flag if the sum of the weights is greater than or equal to four, i.e.,π(x) = 1 7 j=1θ j x j ≥ 4 . To simplify comparisons to the status quo, we will constrain the threshold to remain at 4, though we could additionally include the threshold as a decision variable. Given this, a key consideration is the form of the policy class Π that we will use. We ensure that the new rule has the same structure as the status quo rule, making it easier to adapt the existing system and use institutional knowledge in the jurisdiction. Specifically, we use the following policy class that thresholds an integer-weighted average of the 7 binary covariates, Π int =    π(x) = 1    7 j=1 θ j x j ≥ 4    θ j ∈ Z    .(22) This policy class therefore includes the original NVCA flag rule as a special case (see Table 1). In addition, to understand any differences between policies in this class, we can simply compare the vector of weights (θ 1 , . . . , θ 7 ) ∈ Z 7 . Empirical size of potential model classes We begin by defining several possible models for the outcomes. We consider two models of the conditional expected potential incidence of an NVCA: an additive outcome model class M add , and an outcome model with separate additive terms and common two-way interactions M two , M add =    m(a, x) = 7 j=1 β aj x j    , M two =    m(a, x) = 7 j=1 β aj x j + k<j β jk x j x k    . We additionally restrict the outcome models to be bounded between zero and one. For these two model classes, we only use those cases where the NVCA flag was shown to the judge. Because cases were randomly assigned to the control group for which the judge has no access to the PSA-DMF system, we can alternatively follow the development in Section 6.1 and use the structure of the experiment to place restrictions on the effect of assigning an NVCA flag of 1, τ (a, x) = m(a, x) − m(Ø, x). An advantage is that we can use both the cases that did and did not have access to the PSA. We consider two different treatment effect models: an additive effect model T add and a second order effect model T two , T add =    τ (a, x) = 7 j=1 τ aj x j    , T two =    τ (a, x) = 7 j=1 k<j τ ajk x j x k    . Here, we again restrict the treatment effects to be bounded between −1 and 1. These four model classes each lead to different restrictions, and ultimately affect what policies we can learn from this experiment. This is partly because even with infinite data the models may not be identifiable. But, it is also because with finite data there is a different amount of uncertainty on each model class. 
Figure 4: Empirical size of each candidate model class as a function of the desired confidence level 1 − α. The purple line shows the overall empirical size: the expected maximum across both levels of the NVCA flag. The empirical size when the confidence level is zero serves as a proxy of the population size. For the additive models, the overall size and the size when the NVCA flag is triggered are the same and fully overlap.
Figure 4 depicts this information by showing how the empirical size of the model class (vertical axis), defined in Equation (13), changes with the desired confidence level 1 − α (horizontal axis). Recall that the size when the confidence level is zero serves as a proxy for the population size of the model class defined in Equation (9). For the additive outcome and effect models, the size is zero when the confidence level is zero, implying that these models are identifiable. This is due to the structure of the NVCA flag rule: for given values of the covariates, it is possible to observe cases with the flag set to zero or one. When taking into account the statistical uncertainty, the widths increase. This is primarily due to greater uncertainty for cases with a flag of 1, which account for only 16% of the cases. The size for flag 1 (orange) determines the overall size (purple) and so the lines overlap. The two model classes also differ in how they vary with the confidence level; there is more uncertainty in the treatment effect and so the size of the additive effect model is larger at every value of the confidence level than the additive outcome model. However, the additive treatment effect assumption is significantly weaker than the additive outcome assumption. Relative to the two additive models, the second order models have significantly more uncertainty reflected in larger empirical sizes. This is primarily due to the lack of identification; even without accounting for statistical uncertainty, the widths are already over 75% of their maximum values. Indeed, there are many combinations of the binary covariates where we cannot observe cases that have an NVCA flag of both zero and one. For example, there are only 28 cases (1.5% of the total) where we can identify the model for both values. Moreover, all of these cases have the same characteristics: the arrestee has committed a violent offense, is over 21 years old, has a pending charge, and a prior felony or misdemeanor conviction. The empirical size of the model class gives an indication of the potential optimality gap between the empirical safe policy we learn and the true optimal policy. Unfortunately, this statistic does not describe how flexible the class is and whether we should expect it to contain the true relationship between the potential outcomes and the covariates, since it only describes how much of the model is left unidentified. These considerations are crucial to guarantee robustness. The additive outcome model M add has the smallest size overall, but it also places the strongest restrictions. It is contained by the second order interaction outcome model M two , but this model has a much larger size. Choosing between these two models represents a tradeoff: the larger model M two is more liable to contain the truth and so we can guarantee a greater degree of robustness, but choosing the smaller model M add can yield a better policy if it does indeed contain the truth. In contrast, the additive treatment effect class T add is both weaker than the second order outcome model M two and has a smaller empirical size. Therefore, we can make weaker assumptions without limiting our ability to find a good policy as much.
This is the key benefit of incorporating the control information in this study. Note that the second order treatment effect model, which makes the weakest assumptions, is too large to provide any guarantees on the regret relative to the optimal policy. Therefore, for the remainder of this section, we focus on the additive treatment effect model, which allows us to include the control group information and make weaker assumptions than the additive outcome model. Constructing a robust NVCA flag rule Figure 5 shows how the robust policy, which solves the optimization problem given in Equation (16) with the additive treatment effect class T add , compares to the original rule as we vary the cost of an NVCA −u (horizontal axis) and the confidence level 1 − α (vertical axis). With the integer-weighted policy class Π int , the optimization problem is an integer program; we solve this with the Gurobi solver (Gurobi Optimization, LLC, 2021). The left panel shows the percent of recommendations changed from the original ones, while the right panel displays the improvement in the worst-case value over the original NVCA flag rule. Across every confidence level, the robust policy differs less and less from the original rule as the cost of an NVCA relative to the cost of triggering the flag increases. For a given cost of an NVCA, policies at lower confidence levels are more aggressive in deviating from the original rule, prioritizing a potentially lower regret relative to the optimal policy at the expense of guaranteeing that the new policy is no worse than the original rule.
Figure 6: Change in the robust NVCA flag weights θ in Equation (22) as the cost of an NVCA increases from 100% to 1,000% of the cost of triggering the NVCA flag, at a confidence level of 1 − α = 80%.
Figure 6 inspects the integer weights on the risk factors for the robust policy at the 1 − α = 80% level, as the cost of an NVCA increases. In the limiting setting where an NVCA is given the same cost as triggering the NVCA flag, the robust policy differs substantially from the original rule. In light of the empirical sizes displayed in Figure 4, this behavior is primarily due to increased uncertainty in the effect of triggering the NVCA flag relative to not triggering it. When the cost of an NVCA is low, the robust policy will not trigger the NVCA flag for cases that triggered the original flag; even with the increased uncertainty, it is preferable in these cases to not trigger the flag. Conversely, when the cost of an NVCA is high, the increased uncertainty in the effect of triggering the flag makes the robust policy default to the original rule. In this case, the high costs of an NVCA make any change in the policy too risky to act upon. Incorporating judge's decisions So far, we have only considered the outcomes of triggering the NVCA flag and have assigned costs directly to the flag. However, the PSA serves as a recommendation to the presiding judge who is the ultimate decision maker. Following the discussion in Section 6.2, we can incorporate this into the construction of the robust policy. Rather than place a cost on triggering the NVCA flag, we use the judge's binary decision of whether to assign a signature bond or cash bail, and place a cost of −1 on assigning cash bail. Unlike the cost on the NVCA flag above, this allows us to address the costs of detention directly. As discussed above, the cost on the judge's decision to assign cash bail includes the fiscal and socioeconomic costs, indexed to be −1.
Following the same analysis as above, we can find robust policies that take the decisions into account for increasing costs of an NVCA relative to assigning cash bail, at various confidence levels. However, for the additive and second order effect models we find policies that differ from the original rule only when we do not take the statistical uncertainty into account -with confidence level 1 − α = 0 -and have no finite sample guarantee that the new policy is not worse than the existing rule. In this case, the policy is extremely aggressive, responding to noise in the treatment effects. Otherwise, we cannot find a new policy that safely improves on the original rule. This is primarily because the overall effects of the PSA on both the judge's decisions and defendants' behavior are small; therefore there is too much uncertainty to ensure that a new policy would reliably improve upon the existing rule. DMF Matrix Another key component of the PSA-DMF framework is the overall recommendation given by the DMF matrix (see Figure 1). This aggregates the FTA and NCA scores into a single recommendation on assigning a signature bond versus cash bail. We now consider constructing a new DMF matrix based on the FTA and NCA scores, which we combine into a vector (x fta , x nca ) ∈ {1, . . . , 6} 2 . We restrict our analysis to the 1,544 cases that used the DMF matrix rather than those for which cash bail was automatically assigned. Here, we again focus on the class of additive treatment effect models, τ add (a, x) = τ fta (a, x fta ) + τ nca (a, x nca ). To search for new policies, we consider a policy class that is monotonically increasing in both covariates. This monotonic policy class contains the DMF matrix rule as a special case and incorporates the notion that no case should move from a cash bail to a signature bond recommendation if the risk of an FTA or NCA increases. Formally, this monotonic policy class is given by, Π mono = {π(x) ∈ [0, 1] | π(x fta , x nca ) ≤ π(x fta + 1, x nca ) and π(x fta , x nca ) ≤ π(x fta , x nca + 1)} . As in Section 7.2, we consider parameterizing the utility in terms of a fixed cost of 1 for recommending cash bail -reflecting the fiscal and socioeconomic costs of detention -and varying the cost of an NVCA. Figure 8 shows the robust policies learned for the varying cost of an NVCA and different confidence levels. In the limiting case where the cost of an NVCA is equal to the cost of recommending cash bail, the safe policy is to assign a signature bond for all but the most extreme cases. This is because even if assigning a signature bond is guaranteed to lead to an NVCA, the utility is equal to assigning cash bail and not leading to an NVCA. In the other limiting case, we eschew finite sample statistical guarantees and set the confidence level to 0. That is, we ignore any statistical uncertainty in estimating the conditional expectation function, and instead use the point estimate directly. When doing this, increasing the cost of an NVCA relative to recommending cash bail leads to more of the intermediate area with FTA scores between 2 and 4 and NCA scores between 3 and 4 being assigned cash bail, until the cost is high enough that the entire identified area is assigned cash bail. However, this does not hold up to even the slightest of statistical guarantees due to the uncertainty in the treatment effects. Because the effects of assigning cash bail are both small and uncertain, the robust policy reduces to the existing DMF matrix.
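One way to search the monotone policy class Π mono is to note that, with binary actions, any monotone rule on the 6 × 6 score grid can be written as a per-FTA-score threshold on the NCA score that is non-increasing in the FTA score. The sketch below enumerates these rules against an assumed per-cell worst-case value table; the numbers are illustrative, and the per-cell decomposition presumes a point-wise bounded model class rather than the exact restricted class used in the study.

```python
import numpy as np
from itertools import combinations_with_replacement

scores = np.arange(1, 7)                      # FTA and NCA scores, 1-6
rng = np.random.default_rng(3)

# Assumed worst-case value of each action at each (fta, nca) cell, standing in for
# the bounds implied by the empirical restricted model class.
wc_value = rng.uniform(0.0, 1.0, size=(2, 6, 6))   # [action, fta-1, nca-1]
cell_prob = np.full((6, 6), 1 / 36)                # assumed covariate distribution

def policy_from_thresholds(t):
    """Monotone rule: recommend cash bail (action 1) iff x_nca >= t[x_fta]; t is
    non-increasing in the FTA score, so the rule is monotone in both scores."""
    return (scores[None, :] >= np.asarray(t)[:, None]).astype(int)   # shape (6, 6)

best_value, best_policy = -np.inf, None
for t_desc in combinations_with_replacement(range(7, 0, -1), 6):     # non-increasing t
    pol = policy_from_thresholds(t_desc)
    value = np.sum(cell_prob * wc_value[pol, np.arange(6)[:, None], np.arange(6)[None, :]])
    if value > best_value:
        best_value, best_policy = value, pol

print("best worst-case value:", round(best_value, 3))
print(best_policy)
```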
Discussion In recent years, algorithmic and data-driven policies and recommendations have become an integral part of our society. Being motivated in part by this transformative change, the academic literature on optimal policy learning has flourished. The increasing availability of granular data about individuals at scale means that the opportunities to put these new methodologies in practice will only grow more in the future. One important challenge when learning and implementing a new policy in the real world is to ensure that it does not perform worse than the existing policy. This safety feature is critical, especially if relevant decisions are consequential. In this paper, we develop a robust optimization approach to deriving an optimal policy that has a statistical safety guarantee. This allows policy makers to limit the probability that a new policy achieves a worse outcome than the existing policy. The development of a safe policy is essential particularly when it is impossible to conduct a randomized experiment for ethical and logistical reasons. Observational studies bring additional uncertainty due to the lack of identification. Moreover, for transparency and interpretability, most policies are based on known, deterministic rules, making it difficult to learn a new policy using standard methods such as inverse probability-of-treatment weighting. We develop a methodology that addresses these challenges and apply it to a risk assessment instrument in the criminal justice system. Our analysis suggests an opportunity for improving the existing scoring rules. An important aspect of this methodology is that it depends on the design of the baseline policy. The structure of the baseline will determine what is identifiable and what is not. For example, in the PSA-DMF system we explore here, we were able to fully identify additive models because the NVCA scoring rule incorporates several risk factors and no single risk factor guarantees that the flag will fire. On the other hand, we could not fully identify an additive model for the DMF matrix because if either the NCA or FTA scores are large enough, the recommendation is always cash bail. This logic extends to higher dimensions. For example, we could not identify many terms in the interactive effect model because most combinations of two risk factors result in an NVCA flag. we used the sharper bound that 0 ≤ m(Ø, x) + τ (a, x) ≤ 1 -and properly accounted for boundary effects -the safe policy would never assign cash bail. Therefore, this framework is likely to be most successful for policies based on several covariates that are aggregated to a single score before thresholding. There are several avenues for future research. The first set of questions relates to the implemen- Third, there are many ways in which optimal algorithmic recommendations may differ when considering long term societal outcomes rather than short term ones. For example, pre-trial detention brought on by a recommendation may in turn alter the long term behavior and welfare of an arrestee. Understanding how to design algorithms when they affect decisions that mediate future outcomes is key to ensuring that recommendations do not take a myopic view. One potential way to incorporate long term outcomes may be with the use of surrogate measures. More work needs to be done on the question of how to incorporate surrogate measures into our policy learning framework while providing a safety guarantee. 
Finally, within the robust optimization framework, the notion of "safety" can be considerably expanded. In this paper, we consider policies to be safe if they do not lead to worse outcomes on average; however, this does not guarantee that outcomes are not worse for subgroups. A more equitable notion of safety would be to ensure safety across subgroups, though doing so may reduce the ability to improve overall welfare. Similarly, the robust optimization framework can be made to incorporate statistical fairness criteria -a different form of safety. Such constraints may be themselves uncertain or only partially identified, and so a robust approach would account for this as well . Taking the greatest lower bound and least upper bound for each component function, the overall lower and upper bounds are, B (a, X) = j sup x ∈Xa m j (x j ) − λ a |X j − x j | + j<k sup x ∈Xa m jk (x j , x k ) − λ a X (j,k) − x (j,k) + . . . B u (a, X) = j inf x ∈Xa m j (x j ) + λ a |X j − x j | + j<k inf x ∈Xa m jk (x j , x k ) + λ a X (j,k) − x (j,k) + . . . ,(23) where x (j,k) is the subvector of components j and k of x. Unlike in Example 2, this extrapolates covariate by covariate, finding the tightest bounds for each component. For instance, for a firstorder additive model, the level of extrapolation depends on the distance in each covariate |x j − x j | separately. To construct the empirical model class for the class of additive models, we use a 1−α confidence interval that holds simultaneously over all values of x and for all components, i.e., m j (x j ) ∈ C (j) α (x j ), m jk (x j , x k ) ∈ C (j,k) α (x j , x k ), . . . , ∀ j = 1, . . . , d, k < j, . . . , with probability at least 1 − α. Analogous to the Lipschitz case in Example 2 above, we can then construct the lower and upper bounds using the lower and upper bounds of the confidence intervals, B α (a, X) = j sup x ∈Xa C (j) α (x j ) − λ|X j − x j | + j<k sup x ∈Xa C (j,k) α (x j , x k ) − λ X (j,k) − x (j,k) + . . . B αu (a, X) = j inf x ∈Xa C (j) αu (x j ) + λ|X j − x j | + j<k inf x ∈Xa C (j,k) αu (x j , x k ) + λ X (j,k) − x (j,k) + . . . . A.3 Regret for policy classes with finite VC dimension Corollary 3 (Statistical safety with finite VC dimension policy class). If the policy class Π has finite VC dimension ν < ∞, under the conditions in Theorem 2 and for any 0 < δ ≤ e −1 , the regret ofπ relative to the baselineπ is R(π,π, m) ≤ C √ n 4c √ ν + 14 log 1 δ , with probability at least 1 − α − δ, where C = max y∈{0,1},a∈{0,1} |u(y, a)|, and c is a universal constant. Corollary 4 (Empirical optimality grap for bounded model class and finite VC dimension policy class). If the policy class Π has finite VC dimension ν < ∞, under the conditions in Theorem 3 and for any 0 < δ ≤ e −1 , the regret ofπ relative to the optimal policy π * is R(π, π * ) ≤ 2C n n i=1 max a∈A { B αu (a, X i ) − B α (a, X i )} + C √ n 4c √ ν + 14 log 1 δ , with probability at least 1 − α − δ, where C = max y∈{0,1},a∈{0,1} |u(y, a)|, and c is a universal constant. B Proofs and Derivations Proof of Proposition 1. V (π) = V inf (π) ≤ V inf (π inf ) ≤ V (π inf ). Proof of Theorem 1. Since V inf (π) ≤ V (π) for all policies π, the regret is bounded by R(π inf , π * ) = V (π * ) − V (π inf ) ≤ V (π * ) − V inf (π inf ) = V inf (π * ) − V inf (π inf ) + a∈A u(a)E [π * (a, X){1 −π(a | X)}m(a, X)] − J M (π * π). Now since π inf is a minimizer of V inf (π), V inf (π * ) − V inf (π inf ) ≤ 0. 
This yields R(π inf , π * ) ≤ a∈A u(a)E [π * (a | X){1 −π(a | X)}m(a, X)] − inf = |u|W M (π * (1 −π)) ≤ |u| sup π∈Π W M (π(1 −π)) . Now notice that a∈A π(a | x){1 −π(a | x)} ≤ 1 and π(a | x){1 −π(a | x)} ≥ 0 for any π ∈ Π, so R(π inf , π * ) ≤ |u|W M . Then with a binary action set A = {0, 1}, for any δ > 0, sup π∈Π |V (π) − V (π)| ≤ 4CR n (Π) + 4C √ n + δ, with probability at least 1 − exp − nδ 2 2C 2 , where C = max y∈{0,1},a∈{0,1} |u(y, a)|. Proof of Lemma 1. First, with binary actions, the empirical value iŝ V (π) = 1 n n i=1 u(0) {(1 −π(X i ))Y i +π(X i )m(0, X i )} + c(0) + u(1)π(X i ) {π(X i )(Y i − m(0, X i )) + (1 −π(X i ))(m(1, X i ) − Y i )} + π(X i )(c(1) − c(0)). Define the function class with functions f (x, y) in F = {u(0) [(1 −π(X i ))Y i +π(X i )m(0, X i )] + c(0) +u(1)π(X i ) [(π(X i )(Y i − m(0, X i )) + (1 −π(X i ))(m(1, X i ) − Y i )] + π(X i )(c(1) − c(0)) | π ∈ Π} . Now notice that sup π∈Π |V (π) − V (π)| = sup f ∈F 1 n n i=1 f (X i , Y i ) − E [f (X, Y )] . The class F is uniformly bounded by the maximum absolute utility C = max y∈{0,1},a∈{0,1} |u(y, a)|, so by Theorem 4.5 in Wainwright (2019) sup f ∈F 1 n n i=1 f (X i , Y i ) − E [f (X, Y )] ≤ 2R n (F) + δ, with probability at least 1 − exp − nδ 2 2C 2 . Finally, notice that the Rademacher complexity for F is bounded by R n (F) ≤ E X,Y,ε 1 n n i=1 {u(0)((1 −π(X i ))Y i +π(X i )m(0, X i )) + c(0)} ε i + sup π∈Π E X,Y,ε 1 n n i=1 [u(1) {π(X i )(Y i − m(0, X i )) + (1 −π(X i ))(m(1, X i ) − Y i )} + c(1) − c(0)] π(X i )ε i ≤ E ε u(0) + c(0) n n i=1 ε i + sup π∈Π E X,Y,ε 1 n n i=1 (u(1) + c(1) − c(0))π(X i )ε i ≤ 2C n E ε n i=1 ε i + sup π∈Π 2C E X,Y,ε 1 n n i=1 π(X i )ε i ≤ 2C √ n + 2CR n (Π). Proof of Theorem 2. The regret is R(π,π) = V (π) − V (π) = V (π) −V (π) +V (π) −V (π) +V (π) − V (π) ≤ 2 sup π∈Π |V (π) − V (π)| +V (π) −V (π) Now if M ⊂ M n (α), thenV (π) ≥V inf (π), andV (π) =V inf (π). Also, note thatπ maximizeŝ V inf (π). Combining this, we can see that if M ⊂ M n (α), then V (π) −V (π) ≤V inf (π) −V inf (π) ≤ 0. So, with probability at least 1 − α (the probability that M ⊂ M n (α)), R(π,π) ≤ 2 sup π∈Π |V (π) − V (π)|. Now, using Lemma 1 and the union bound, we have that R(π,π) ≤ 8CR n (Π) + 8C √ n + 2t, with probability at least 1 − α − exp − nt 2 8C 2 . Choosing t = C 8 n log 1 δ and noting that 8 + 2 8 log 1 δ ≤ (8 + 2 √ 8) log 1 δ ≤ 14 log 1 δ gives the result. Proof of Theorem 3. The regret ofπ relative to π * is R(π, π * ) = V (π * ) − V (π) = V (π * ) −V (π * ) +V (π * ) −V (π) +V (π) − V (π) ≤ sup π∈Π 2|V (π) − V (π)| +V (π * ) −V (π). We have bounded the first term in Lemma 1, we now turn to the second term. V (π * ) −V (π) =V inf (π * ) −Ĵ Mn(α) (π * π) + 1 n a∈A u(a)π * (a, X i ){1 −π(a | X i )}m(a, X i ) −V (π) SinceV (π) ≥V inf (π) ≥V inf (π * ), conditioned on the event M ∈ M n (α) and with probability at least 1 − α, we have, V (π * ) −V (π) |u| ≤ sup f ∈ Mn(α) 1 n a∈A π * (a, X i ){1 −π(a | X i )}f (a, X i ) − inf f ∈ Mn(α) 1 n a∈A π * (a, X i ){1 −π(a | X i )}f (a, X i ) = W Mn(α) (π * (1 −π)) ≤ max π∈Π W Mn(α) (π(1 −π)) . Now since max x∈X a∈A π(a | x){1 −π(a | x)} ≤ 1 for any π ∈ Π, we get that with probability at least 1 − α,V (π * ) −V (π) ≤ |u| W Mn(α) . Combined with Lemma 1 and the union bound this gives that R(π, π * ) ≤ |u| W Mn(α) + 8CR n (Π) + 8C √ n + 2t, with probability at least 1 − α − exp − nt 2 8C 2 . Choosing t = C 8 n log 1 δ and noting that 8 + 2 8 log 1 δ ≤ (8 + 2 √ 8) log 1 δ ≤ 14 log 1 δ and u ≤ 2C gives the result. Proof of Corollary 2. 
The empirical width of M n (α) = { B α (a, x) ≤ f (a, x) ≤ B αu (a, X)} in the direction of g is W Mn(α) (g) = 1 n The left panel shows the difference in the expected utility between the empirical safe policyπ, and the baseline policỹ π, normalized by the regret of the baseline relative to the oracle, i.e. V (π)−V (π) V (π * )−V (π) . The right panel shows the regret of the safe policy relative to the oracle, scaled by the regret of the baseline relative to the oracle, i.e. V (π * )−V (π) V (π * )−V (π) . By Hölder's inequality, C A Simulation Study We have a single discrete covariate with 10 levels, x ∈ {0, . . . , 9}, and a binary action so that the action set is A = {0, 1}. We choose a baseline policyπ = 1{x ≥ 5}, and set the utility gain to be u(0) = u(1) = 10 and the costs to be c(0) = 0, c(1) = −1, so that action 0 is costless and action 1 costs one tenth of the potential utility gain. For each simulation we draw n i.i.d. samples X 1 , . . . , X n uniformly on {0, . . . , 9}. Then we draw a smooth model for the expected control potential outcome m(0, x) ≡ E[Y (0) | X = x] via random Fourier features. We draw three random vectors: ω ∈ R 100 with i.i.d. standard normal elements; b ∈ R 100 with i.i.d. components drawn uniformly on [0, 2π]; and β ∈ R 100 with i.i.d. standard normal elements. Then we set m(0, x) = logit −1 2 100 β · cos ω x 9 + b , where the cosine operates element-wise. See Rahimi and Recht (2008) for more discussion on random features. For the potential outcome under treatment, m(1, x) = E[Y (1) | X = x], we add a linear treatment effect on the logit scale: m(1, x) = logit −1 logit (m(0, x)) + 1 2 x − 9 2 − 8 10 . We then generate the potential outcomes Y i (0), Y i (1) as independent Bernoulli draws with probabilities m(0, X i ) and m(1, X i ), respectively. With each simulation draw, we consider finding a safe empirical policy by solving Equation (16) under a Lipschitz restriction on the model as in Example 2 and with the threshold policy class Π thresh in Equation (21). Note that the true model is in fact much smoother than Lipschitz; here we consider using the looser assumption. Following our empirical analysis in Section 7.1, we take the average outcome at each value of x, and compute the largest difference in consecutive averages as pilot estimates for the Lipschitz constants λ 0 and λ 1 . We then solve Equation (16) using 1 2 , 1, and 2 times these pilot estimates as the Lipschitz constants, and setting the significance level to 0, 80% and 95%. We additionally compute the oracle threshold policy that uses the true model values m(0, x) and m(1, x). We do this for sample sizes n ∈ (500, 1000,1500,2000). Figure C.1 shows how the empirical safe policyπ compares to both the baseline policyπ and the oracle policy π * in terms of expected utility. First, we see that on average, the empirical safe policy improves over the baseline, no matter the confidence level and the choice of Lipschitz constant. This improvement is larger the less conservative we are, e.g. by choosing a lower confidence level or a smaller Lipschitz constant. Furthermore, as the sample size increases, the utility of the empirical safe policy also increases due to a lower degree of statistical uncertainty. We find similar behavior when comparing to the oracle policy. Less conservative choices lead to lower regret, and the regret decreases with the sample size. 
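As a compact illustration, one draw from this data-generating process, together with the oracle threshold search, can be sketched in a few lines of Python. The sketch paraphrases the description above rather than reproducing the code used for the reported figures; in particular, the constants in the treated-arm logit shift are placeholders, and the confidence-band machinery is omitted.

import numpy as np

rng = np.random.default_rng(0)
n, D = 1000, 100

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

X = rng.integers(0, 10, size=n)              # covariate, uniform on {0, ..., 9}
pi_base = (X >= 5).astype(int)               # baseline policy 1{x >= 5}
u = np.array([10.0, 10.0])                   # utility gains u(0), u(1)
c = np.array([0.0, -1.0])                    # costs c(0), c(1)

# Control-arm mean via random Fourier features (Rahimi and Recht, 2008).
omega = rng.standard_normal(D)
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
beta = rng.standard_normal(D)

def m0(x):
    feats = np.cos(np.outer(np.asarray(x) / 9.0, omega) + b)
    return logistic(np.sqrt(2.0 / D) * (feats @ beta))

def m1(x):
    # Linear shift on the logit scale; the slope and intercept here are indicative only.
    x = np.asarray(x, dtype=float)
    return logistic(np.log(m0(x) / (1.0 - m0(x))) + 0.5 * (x / 9.0 - 0.5) - 0.8)

Y0 = rng.binomial(1, m0(X))
Y1 = rng.binomial(1, m1(X))
Y = np.where(pi_base == 1, Y1, Y0)           # observed outcome under the baseline policy

# True value of a threshold policy pi(x) = 1{x >= t}, and the oracle threshold.
def true_value(t, grid=np.arange(10)):
    a = (grid >= t).astype(int)
    m = np.where(a == 1, m1(grid), m0(grid))
    return float(np.mean(u[a] * m + c[a]))   # X is uniform, so a plain average over the grid

t_star = max(range(11), key=true_value)      # oracle threshold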
Importantly, the regret does not decrease to zero; even when removing all statistical uncertainty the safe policy can still be suboptimal due to the lack of identification. (16) for the NVCA flag threshold rule as the cost of an NVCA increases from 100% to 1,000% of the cost of triggering the NVCA flag, and (a) the confidence level varies between 0% and 100% and (b) the multiplicative factor on the estimated Lipschitz constant varies from 1 to 10. Y (a)u(1, a) + {1 − Y (a)}u(0, a) = {u(1, a) − u(0, a)}Y (a) + u(0, a). Figure 2 : 2The robust policy with a single discrete variable, binary action set A = {0, 1}, constant utility change u(0) = u(1) = 1 and zero cost c(0) = c(1) = 0. The black dashed line indicates the decision boundary for the baseline policyπ; the red dashed line is the boundary for the robust policy π inf . above with the true Lipschitz constants λ 0 = 0.0091 and λ 1 = 0.00912. Note that the intervals are not necessarily symmetric around the true values of the conditional expected outcome, as the lower and upper bounds represent the entire range of possibilities. now give several examples of model classes F and the restricted model classes induced by the data M. For all of the model classes we consider, the restricted model class will be a set of functions that are upper and lower bounded point-wise by two bounding functions, . This similarly results in the form of a point-wise lower and upper bound on the conditional expectation function, with lower and upper bounds B α (a, x) and B αu (a, x), respectively. These point-wise bounds yield a closed form bound on the size of the restricted model class M and the empirical size of the empirical restricted model class M n (α) as the expected maximum difference between the bounds: See Srinivas et al. (2010); Chowdhury and Gopalan (2017); Fiedler et al. (2021) for examples on constructing such simultaneous bounds via kernel methods in statistical control settings. With this confidence band, we can use the upper and lower bounds of the confidence band in place of the true conditional expectationm , we will minimize regret relative to the null policy Ø. Defining the conditional average treatment effect (CATE) of action a relative to no action Ø, τ (a, x) = m(a, x) − m(Ø, x), we can now write the regret of a new policy π relative to the null policy Ø as,R(π, Ø) = −E a∈A π(a | x){u(Y (a) − Y (Ø)) + c(a)} = −E a∈A π(a | x){uτ (a, x) + c(a)} .Now, followingKitagawa and Tetenov (2018), we can identify the the CATE function for the baseline policyπ(x) using the transformed outcome Γ(Z, X, Y ) = Y {Z − e(X)}/{e(X)(1 − e(X))}, which equals the CATE in expectation, i.e., τ (π(x), x) = E[Γ(Z, X, Y ) | X = x]. With this, we follow the development in Section 3.2, with the transformed outcome Γ replacing the outcome Y and a restricted model class for the treatment effects T = {f ∈ F | f (π(x), x) = τ (π(x), x)} replacing the model class for the outcomes M. Specifically, we decompose the regret into an identifiable component and an unidentifiable component, and consider the worst-case regret across all treatment effects in T , giving the population robust optimization problem, To formalize this setting, let D i (a) ∈ {0, 1} be the potential (binary) decision for individual i under action (or an algorithmic recommendation in our application) a ∈ A, and Y i (d, a) ∈ {0, 1} be the potential (binary) outcome for individual i under decision d ∈ {0, 1} and action a ∈ A. 
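For concreteness, under the usual consistency convention (an assumption made explicit here), the observed decision and outcome for case i under the deterministic baseline recommendation are

D_i = D_i(\tilde{\pi}(X_i)), \qquad Y_i = Y_i\bigl(D_i, \tilde{\pi}(X_i)\bigr),

so only the decision and the outcome induced by the status-quo recommendation are ever observed.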
We denote the expected potential decision conditional on covariates x as d(a, x) = E[D(a) | X = x].Further, we denote the potential outcome under action a as Y i (a) = Y i (D i (a), a) and again represent the conditional expectation by m(a, x) = E[Y i (a) = 1 | X = x]. Then, the observed decision is given by Denoting the conditional expected observed decision as d(π(x), x) = E[D | X = x], we can posit a model class for the decisions F and created the restricted model class D = {f ∈ F | f (π(x), x) = d(π(x), x)}.3 We can now create a population safe policy by maximizing the worst case value across the model classes for both the outcomes M and the decisions D, 20). Finally, we can incorporate experimental evidence as in Section 6.1. In this case, the conditional expected potential decision d(a, x) and outcome m(a, x)and their model classes -are replaced with the conditional average treatment effect on the decision E[D(a) − D(Ø) | X = x] and on the outcome τ (a, x). not a new violent criminal activity, u(a) = u(1, a) − u(0, a), for both actions a ∈ {0, 1}. Similarly, we need to define the baseline "cost" of action a, c(a) = u(0, a). While the marginal monetary cost of triggering the NVCA flag versus not can be considered approximately zero given the initial fixed cost of collecting the data for the PSA, there are other costs to consider. For instance, to the extent that triggering the NVCA flag increases the likelihood of pre-trial detention, it will lead Figure 4 : 4The empirical size (as a percentage of its maximum value) of four different model classes versus the confidence level 1 − α. The green and orange lines separate the size into the component stemming from the region where the NVCA flag is zero and one (a = 0 and a = 1 separately). Figure 5 : 5The difference between the robust policy and the original NVCA flag rule as the cost of an NVCA increases from 100% to 1,000% of the cost of triggering the NVCA flag, and the confidence level varies between 0% and 100%. The shading in the left panel shows the percentage of recommendations that differs between the two policies; in the right panel it shows how much the robust policy improves the worst case value relative to the original rule. In all cases, the robust policy changes the flag from a "Yes" to a "No."placing no weight on prior violent convictions. Once the cost is at least five times the cost of triggering the flag, the robust policy reduces to the original rule. For intermediate values, the robust policy places less -but not zero -weight on the number of prior violent convictions than the original rule. Figure 7 : 7Upper bound on the treatment effects under the additive model τ add (a, x) for FTA and NCA scores. Values below and to the right of the dashed white line are areas where cash bail is recommended, and the bounds are on the effect of recommending a signature bond. Values above and to the left are areas where a signature bond is recommended, and the bounds are on the effect of recommending cash bail. Figure 8 : 8Robust monotone policy recommendations under an additive model for the treatment effects, as the cost of an NVCA and the confidence level vary. The dashed black line indicates the original decision boundary between a signature bond (above and to the left) and cash bail (below and to the right). NCA score of 3, there is a significant amount of uncertainty due to small sample sizes. Indeed, there are only 3 cases where cash bail is recommended that have an NCA score of 3 and 2 cases that have an FTA score of 2. 
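The cost bookkeeping used in this application can be made concrete in a few lines of Python; the additive form of u(y, a) below is one simple parameterization consistent with the description, with placeholder numbers rather than the calibration behind the reported results.

def utility_table(cost_nvca, cost_flag=1.0):
    # y = 1: an NVCA occurs; a = 1: the NVCA flag is triggered (cash bail recommended).
    u = {(y, a): -(cost_nvca * y + cost_flag * a) for y in (0, 1) for a in (0, 1)}
    gain = {a: u[(1, a)] - u[(0, a)] for a in (0, 1)}   # u(a) = u(1, a) - u(0, a)
    cost = {a: u[(0, a)] for a in (0, 1)}               # c(a) = u(0, a)
    return u, gain, cost

# An NVCA costing five times the cost of triggering the flag:
u, gain, cost = utility_table(cost_nvca=5.0)
# gain == {0: -5.0, 1: -5.0} and cost == {0: -0.0, 1: -1.0}

Varying cost_nvca in this table corresponds to the range of NVCA costs considered in the robust-policy comparisons above.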
a)E [π * (a | X){1 −π(a | X)}f (a, X)] ≤ sup f ∈M a∈A u(a)E [π * (a | X){1 −π(a | X)}f (a, X)] − inf f ∈M a∈A u(a)E [π * (a | X){1 −π(a | X)}f (a, X)] Proof of Corollary 1 . 1The width of M = {B (a, x) ≤ f (a, x) ≤ B u (a, X)} in the direction of g is W M (g) = E a∈A g(a, X){B u (a, X) − B (a, X)} . x){B u (a, X) − B (a, X)} ≤ E max a∈A B u (a, X) − B (a, X)Lemma 1. Define the empirical value of a policy π aŝV (π) = 1 n a∈A π(a | X i ) {u(a) [π(a | X i )Y i + {1 −π(a | X i )}m(a, X i )] + c(a)} . a, X){ B αu (a, X) − B α (a, X)}. Figure C. 1 : 1Monte Carlo simulation results as the sample size n increases, varying the multiplicative factor on the empirical Lipschitz constant and the significance level 1 − α. W{ Mna, x){ B αu (a, X) − B α (a, B αu (a, X) − B α (a, X)}. Figure D. 2 : 2New safe threshold values solving Equation Table 1: Weights placed on risk factors to construct the failure to appear (FTA), new criminal activity (NCA), and new violent criminal activity (NVCA) scores. The sum of the weights are then thresholded into six levels for the FTA and NCA scores and a binary "Yes"/"No" for the NVCA score.Risk factor FTA NCA NVCA Current violent offense > 20 years old 2 ≤ 20 years old 3 Pending charge at time of arrest 1 3 1 Prior conviction misdemeanor or felony 1 1 1 misdemeanor and felony 1 2 1 Prior violent conviction 1 or 2 1 1 3 or more 2 2 Prior sentence to incarceration 2 Prior FTA in past 2 years only 1 2 1 2 or more 4 2 Prior FTA older than 2 years 1 Age 22 years or younger 2 These three PSA scores are then combined into two recommendations for the judge: whether to require a signature bond for release or to require some level of cash bail, and what, if any, monitoring conditions to place on release. In this paper, we analyze the dichotomized release recommendation, i.e., signature bond versus cash bail, and ignore recommendations about monitoring conditions. Both of these recommendations are constructed via the so-called "Decision Making Framework" (DMF), which is a deterministic function of the PSA scores. For our analysis, we exclude the cases where the current charge is one of several serious violent offenses, the defendant was extradited, or the NVCA score is 1, because the DMF automatically recommends cash bail for these cases. We do not consider altering this aspect of the DMF.For the remaining cases, the FTA and NCA risk scores are combined via a decision matrix.Figure 1: Decision Making Framework (DMF) matrix for cases where the current charge is not a serious violent offense, the NVCA flag is not triggered, and the defendant was not extradited. If the FTA score and the NCA score are both less than 5, then the recommendation is to only require a signature bond. Otherwise the recommendation is to require cash bail. Unshaded areas indicate impossible combinations of FTA and NCA scores.Figure 1 shows a simplified version of the DMF matrix highlighting where the recommendation is to require a signature bond (beige) versus cash bail (orange) for release. If the FTA score and 2 4 6 2 4 6 NCA Score FTA Score PSA Recommendation Signature Bond Cash Bail will have the same value as the oracle, reducing to the standard case where we can point identify the conditional expectation. Conversely, if we can only point identify the conditional expectation function m(a, x) τ nca (a, x nca ), where we only condition on the FTA and NCA scores since they are the two components of the DMF decision matrix. 
Because x fta and x nca are discrete with six values, we can further To understand how this additive treatment effect model facilitates robust policy learning, we inspect the upper bounds on the treatment effects as the confidence level changes.Figure 7shows these bounds for the different values of the FTA and NCA scores. As in Section 7.2 above, the bounds with zero confidence level correspond to bounds induced by the model class in the popu-parameterize the additive terms as six dimensional vectors. Importantly, this rules out interactions between the FTA and NCA scores in the effect. If this assumption is not credible, we could use a Lipschitz restriction as in Example 2. This alternative assumption may be significantly weaker, though it would require choosing the Lipschitz constant. lation. Because we can never observe a case where the DMF recommends a signature bond with either an FTA score or NCA score above 4, we cannot identify the additive model components for either variable above 4. Because of this, the upper bound on the effect of recommending a signature bond for these cases is 1, the maximum value. Similarly, we can never observe a case where the DMF recommends cash bail with either an FTA score below 2 or an NCA score below 3. This precludes assigning cash bail to these cases. In the middle is an intermediate area with FTA scores between 2 and 4 and NCA scores between 3 and 4 where we can fully identify the effect of assigning cash bail under the additive model. Note that unlike the NVCA threshold above, this means that new policies can only differ from the status quo in the less lenient direction, recommending cash bail for arrestees that would have a signature bond recommendation under the status quo. However, for values with an FTA score of 2 or an Cost: 1 Cost: 10 Cost: 2 Cost: 5 Cost: 6 Cost: 7 Confidence Level 0% Confidence Level 20% Confidence Level 50% Confidence Level 80% 2 4 6 2 4 6 2 4 6 2 4 6 2 4 6 2 4 6 2 4 6 2 4 6 2 4 6 2 4 6 NCA Score FTA Score PSA Recommendation Signature Bond Cash Bail tation choices under the proposed approach. While we consider several representative cases, there are many other structural assumptions that would lead to different forms of extrapolation. For instance, we could consider a global structure in the form of Reproducing Kernel Hilbert Spaces, or incorporate substantive restrictions such as monotonicity. In addition, while our study on pre-trial risk scores focused on discrete covariates, deterministic policies with continuous covariates opens the opportunity to directly identify treatment effects on the decision boundary, leading to a different form of restriction on the model class. We can also generalize this approach to consider cases where policies consist of both stochastic and deterministic components. This would nest the current deterministic case including the experimental setting discussed in Section 6.1. Second, we can use similar statistical tools to create tests of safety for given policies. By creating a worst-case upper bound on the regret of a policy relative to the status quo, we can test for whether this upper bound is below zero, rejecting the null that the proposed new policy is not an improvement over the existing status quo. Note that the particular form of the DMF has evolved since its implementation in Dane county, and now takes a different form. See https://advancingpretrial.org/guide/guide-to-the-release-condition-matrix/ To simplify the notation, we will fix this policy to be deterministic as well. 
The theoretical results developed in Sections 3.4 and 4.2 will also apply for stochastic policies, whereas in Section 5.2, we explicitly consider deterministic policy classes. These restrictions being on the decisions gives more opportunities for structural restrictions on the model. For example, we could make a monotonicity assumption that d(a, x) ≤ d(a , x) for a ≤ a . This is a consequence of the looser bound that the treatment effects are bounded between −1 and 1. If instead Appendix A Corollaries and an Additional ExampleA.1 Regret for bounded model classes Corollary 1 (Population optimality gap for bounded model class). Let π inf be a solution toCorollary 2 (Empirical optimality gap for bounded model class). Letπ be a solution to Equation (12). If the true conditional expectation m(a, x) ∈ M, then for any 0 < δ ≤ e −1 , the regret of π relative to the optimal policy π * ∈ argmax π∈Π V (π) iswith probability at least 1 − α − δ, where C = max y∈{0,1},a∈{0,1} |u(y, a)|.A.2 An additional exampleExample A.1 (Additive models). If the model class for action a consists of additive models, we havewhere the component functions f j (a, ·), f jk (a, ·), . . . can be subject to additional restrictions so that the decomposition is unique. This additive decomposition formulation amounts to an assumption that no interactions exist above a certain order. By using the same additive decomposition form(x) intom(x) = jm j (X j ) + j<km jk (X j , X k ) + . . ., we can follow the same bounding approach as in Example 2 for each of the component functions. For example, for the additive term for covariate j, m j (a, x j ), the Lipschitz property implies that, m j (x j ) − λ a |x j − x j | ≤ m j (a, x j ) ≤m(x j ) + λ a |x j − x j | ∀ x ∈X a . Machine bias: There's software used across the country to predict future criminals. and it's biased against blacks. J Angwin, J Larson, S Mattu, L Kirchner, Angwin, J., J. Larson, S. Mattu, and L. Kirchner (2016). Machine bias: There's software used across the country to predict future criminals. and it's biased against blacks. https://www. propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Policy Learning With Observational Data. S Athey, S Wager, Econometrica. 891Athey, S. and S. Wager (2021). Policy Learning With Observational Data. Econometrica 89 (1), 133-161. Counterfactual Probabilities: Computational Methods, Bounds and Applications. A Balke, J Pearl, Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence. the Tenth Conference on Uncertainty in Artificial IntelligenceBalke, A. and J. Pearl (1994). Counterfactual Probabilities: Computational Methods, Bounds and Applications. In Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, pp. 46-54. Mostly exploration-free algorithms for contextual bandits. H Bastani, M Bayati, K Khosravi, Management Science. 673Bastani, H., M. Bayati, and K. Khosravi (2021). Mostly exploration-free algorithms for contextual bandits. Management Science 67 (3), 1329-1349. Theory and applications of robust optimization. D Bertsimas, D B Brown, C Caramanis, SIAM Review. 533Bertsimas, D., D. B. Brown, and C. Caramanis (2011). Theory and applications of robust opti- mization. SIAM Review 53 (3), 464-501. The offset tree for learning with partial labels. A Beygelzimer, J Langford, Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. the ACM SIGKDD International Conference on Knowledge Discovery and Data MiningBeygelzimer, A. 
and J. Langford (2009). The offset tree for learning with partial labels. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 129- 137. A Quick Guide to SNAP Eligibility and Benefits. Center on Budget and Policy Priorities. Technical reportCenter on Budget and Policy Priorities (2017). A Quick Guide to SNAP Eligibility and Benefits. Technical report. On kernelized multi-armed bandits. S R Chowdhury, A Gopalan, 34th International Conference on Machine Learning. Chowdhury, S. R. and A. Gopalan (2017). On kernelized multi-armed bandits. 34th International Conference on Machine Learning, ICML 2017 2, 1397-1422. Counterfactual risk assessments, evaluation, and fairness. A Coston, A Mishler, E H Kennedy, A Chouldechova, FAT* 2020 -Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. Coston, A., A. Mishler, E. H. Kennedy, and A. Chouldechova (2020). Counterfactual risk assess- ments, evaluation, and fairness. FAT* 2020 -Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 582-593. Individualized decision making under partial identification: three perspectives, two optimality results, and one paradox. Harvard Data Science Review. Y Cui, Just acceptedCui, Y. (2021). Individualized decision making under partial identification: three perspectives, two optimality results, and one paradox. Harvard Data Science Review . Just accepted. A Semiparametric Instrumental Variable Approach to Optimal Treatment Regimes Under Endogeneity. Y Cui, E Tchetgen Tchetgen, Journal of the American Statistical Association. 116533Cui, Y. and E. Tchetgen Tchetgen (2021). A Semiparametric Instrumental Variable Approach to Optimal Treatment Regimes Under Endogeneity. Journal of the American Statistical Associa- tion 116 (533), 162-173. Compas risk scales: Demonstrating accuracy equity and predictive parity. W Dieterich, C Mendoza, T Brennan, Northpointe Inc. Research DepartmentDieterich, W., C. Mendoza, and T. Brennan (2016). Compas risk scales: Demonstrating accu- racy equity and predictive parity. http://go.volarisgroup.com/rs/430-MBX-989/images/ ProPublica_Commentary_Final_070616.pdf. Northpointe Inc. Research Department. Doubly Robust Policy Evaluation and Learning. M Dudik, J Langford, Proceedings of the 28th International Conference on Machine Learning. the 28th International Conference on Machine LearningDudik, M. and J. Langford (2011). Doubly Robust Policy Evaluation and Learning. In Proceedings of the 28th International Conference on Machine Learning. Practical and Rigorous Uncertainty Bounds for Gaussian Process Regression. C Fiedler, C W Scherer, S Trimpe, Association for the Advancement of Artificial Intelligence. Fiedler, C., C. W. Scherer, and S. Trimpe (2021). Practical and Rigorous Uncertainty Bounds for Gaussian Process Regression. In Association for the Advancement of Artificial Intelligence. Randomized control trial evaluation of the implementation of the psa-dmf system in dane county. D J Greiner, R Halen, M Stubenberg, J Chistopher, L Griffen, Access to Justice Lab, Harvard Law SchoolTechnical reportGreiner, D. J., R. Halen, M. Stubenberg, and J. Chistopher L. Griffen (2020). Randomized control trial evaluation of the implementation of the psa-dmf system in dane county. Technical report, Access to Justice Lab, Harvard Law School. Maximizing intervention effectiveness. V Gupta, B R Han, S H Kim, H Paek, Management Science. 6612Gupta, V., B. R. Han, S. H. Kim, and H. Paek (2020). 
Maximizing intervention effectiveness. Management Science 66 (12), 5576-5598. Gurobi Optimizer Reference Manual. Gurobi Optimization, LLC (2021)Gurobi Optimization, LLC (2021). Gurobi Optimizer Reference Manual. Bayesian Regression Tree Models for Causal Inference: Regularization, Confounding, and Heterogeneous Effects. P R Hahn, J S Murray, C M Carvalho, Bayesian Analysis. Hahn, P. R., J. S. Murray, and C. M. Carvalho (2020). Bayesian Regression Tree Models for Causal Inference: Regularization, Confounding, and Heterogeneous Effects. Bayesian Analysis, 1-33. Statistics and Causal Inference: Rejoinder. P W Holland, Journal of the American Statistical Association. 81396968Holland, P. W. (1986). Statistics and Causal Inference: Rejoinder. Journal of the American Statistical Association 81 (396), 968. Principal fairness for human and algorithmic decision-making. K Imai, Z Jiang, Imai, K. and Z. Jiang (2020). Principal fairness for human and algorithmic decision-making. arxiv preprint https://arxiv.org/pdf/2005.10400.pdf. Experimental Evaluation of Computer-Assisted Human Decision-Making: Application to Pretrial Risk Assessment Instrument (with discussion). K Imai, Z Jiang, D J Greiner, R Halen, S Shin, Statistics in Society), Forthcoming. arxiv preprint. Imai, K., Z. Jiang, D. J. Greiner, R. Halen, and S. Shin (2020). Experimental Evaluation of Computer-Assisted Human Decision-Making: Application to Pretrial Risk Assessment Instru- ment (with discussion). Journal of the Royal Statistical Society, Series A (Statistics in Society), Forthcoming. arxiv preprint https://arxiv.org/pdf/2012.02845.pdf. Optimized regression discontinuity designs. The Review ofEconomics and Statistics. G Imbens, S Wager, 101Imbens, G. and S. Wager (2019). Optimized regression discontinuity designs. The Review ofEco- nomics and Statistics 101 (May), 264-278. Balanced policy evaluation and learning. N Kallus, Advances in Neural Information Processing Systems. Kallus, N. (2018). Balanced policy evaluation and learning. Advances in Neural Information Processing Systems 2018-December (1), 8895-8906. Minimax-optimal policy learning under unobserved confounding. N Kallus, A Zhou, Management Science. 675Kallus, N. and A. Zhou (2021). Minimax-optimal policy learning under unobserved confounding. Management Science 67 (5), 2870-2890. A model to predict survival in patients with end-stage liver disease. P S Kamath, R H Wiesner, M Malinchoc, W Kremers, T M Therneau, C L Kosberg, G , E R Dickson, W R Kim, Hepatology. 332Kamath, P. S., R. H. Wiesner, M. Malinchoc, W. Kremers, T. M. Therneau, C. L. Kosberg, G. D'amico, E. R. Dickson, and W. R. Kim (2001). A model to predict survival in patients with end-stage liver disease. Hepatology 33 (2), 464-470. A smoothed analysis of the greedy algorithm for the linear contextual bandit problem. S Kannan, J Morgenstern, A Roth, B Waggoner, Z S Wu, Advances in Neural Information Processing Systems. Kannan, S., J. Morgenstern, A. Roth, B. Waggoner, and Z. S. Wu (2018). A smoothed analysis of the greedy algorithm for the linear contextual bandit problem. Advances in Neural Information Processing Systems 2018-December (NeurIPS), 2227-2236. Who Should Be Treated? Empirical Welfare Maximization Methods for Treatment Choice. T Kitagawa, A Tetenov, Econometrica. 862Kitagawa, T. and A. Tetenov (2018). Who Should Be Treated? Empirical Welfare Maximization Methods for Treatment Choice. Econometrica 86 (2), 591-616. Human decisions and machine predictions. 
J Kleinberg, H Lakkaraju, J Leskovec, J Ludwig, S Mullainathan, Quarterly Journal of Economics. 1331Kleinberg, J., H. Lakkaraju, J. Leskovec, J. Ludwig, and S. Mullainathan (2018). Human decisions and machine predictions. Quarterly Journal of Economics 133 (1), 237-293. Metalearners for estimating heterogeneous treatment effects using machine learning. S R Künzel, J S Sekhon, P J Bickel, B Yu, Proceedings of the National Academy of Sciences of the United States of America. the National Academy of Sciences of the United States of America116Künzel, S. R., J. S. Sekhon, P. J. Bickel, and B. Yu (2019). Metalearners for estimating heteroge- neous treatment effects using machine learning. Proceedings of the National Academy of Sciences of the United States of America 116 (10), 4156-4165. Statistical inference for the mean outcome under a possibly non-unique optimal treatment strategy. A R Luedtke, M J Van Der Laan, Annals of Statistics. 442Luedtke, A. R. and M. J. Van Der Laan (2016). Statistical inference for the mean outcome under a possibly non-unique optimal treatment strategy. Annals of Statistics 44 (2), 713-742. Social Choice with Partial Knowledge of Treatment Response. C F Manski, Princeton University PressManski, C. F. (2005). Social Choice with Partial Knowledge of Treatment Response. Princeton University Press. Learning optimal distributionally robust individualized treatment rules (with discussion). W Mo, Z Qi, Y Liu, Journal of the American Statistical Association. forthcomingMo, W., Z. Qi, and Y. Liu (2020). Learning optimal distributionally robust individualized treatment rules (with discussion). Journal of the American Statistical Association, forthcoming. Definitions, methods, and applications in interpretable machine learning. W J Murdoch, C Singh, K Kumbier, R Abbasi-Asl, B Yu, Proceedings of the National Academy of Sciences of the United States of America. the National Academy of Sciences of the United States of America116Murdoch, W. J., C. Singh, K. Kumbier, R. Abbasi-Asl, and B. Yu (2019). Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences of the United States of America 116 (44), 22071-22080. On the application of probability theory to agricultural experiments. essay on principles. section 9. J Neyman, Statistical Science. 54Neyman, J. (1990 [1923]). On the application of probability theory to agricultural experiments. essay on principles. section 9. Statistical Science 5 (4), 465-472. Quasi-oracle estimation of heterogeneous treatment effects. X Nie, S Wager, Biometrika. 1082Nie, X. and S. Wager (2021). Quasi-oracle estimation of heterogeneous treatment effects. Biometrika 108 (2), 299-319. Estimating optimal treatment rules with an instrumental variable: A partial identification learning approach. H Pu, B Zhang, Journal of the Royal Statistical Society Series B. Pu, H. and B. Zhang (2021). Estimating optimal treatment rules with an instrumental variable: A partial identification learning approach. Journal of the Royal Statistical Society Series B , 1-28. Performance guarantees for individualized treatment rules. M Qian, S A Murphy, The Annals of Statistics. 392Qian, M. and S. A. Murphy (2011). Performance guarantees for individualized treatment rules. The Annals of Statistics 39 (2), 1180-1210. Greedy Algorithm almost Dominates in Smoothed Contextual Bandits. M Raghavan, A Slivkins, J W Vaughan, Z S Wu, Raghavan, M., A. Slivkins, J. W. Vaughan, and Z. S. Wu (2021). 
Greedy Algorithm almost Dominates in Smoothed Contextual Bandits. Random Features for Large-Scale Kernel Machines. A Rahimi, B Recht, Advances in Neural Information Processing Systems. 20Rahimi, A. and B. Recht (2008). Random Features for Large-Scale Kernel Machines. In Advances in Neural Information Processing Systems, Volume 20. P R Rosenbaum, Observational Studies. New YorkSpringer-Verlag2 ed.Rosenbaum, P. R. (2002). Observational Studies (2 ed.). New York: Springer-Verlag. Randomization Analysis of Experimental Data: The Fisher Randomization Test. D B Rubin, Journal of the American Statistical Association. 75371Comment onRubin, D. B. (1980). Comment on "Randomization Analysis of Experimental Data: The Fisher Randomization Test". Journal of the American Statistical Association 75 (371), 591-593. . I Ruczinski, Lecture Notes on Simultaneous Confidence Intervals. Ruczinski, I. (2002). Lecture Notes on Simultaneous Confidence Intervals. The age of secrecy and unfairness in recidivism prediction. C Rudin, C Wang, B Coker, Harvard Data Science Review. 31Rudin, C., C. Wang, and B. Coker (2020, 3). The age of secrecy and unfairness in recidivism prediction. Harvard Data Science Review 2 (1). https://hdsr.mitpress.mit.edu/pub/7z10o269. Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design. N Srinivas, A Krause, S M Kakade, M Seeger, Proceedings of the 27th International Conference on Machine Learning. the 27th International Conference on Machine LearningSrinivas, N., A. Krause, S. M. Kakade, and M. Seeger (2010). Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design. Proceedings of the 27th International Conference on Machine Learning (ICML 2010), 1015-1022. policytree: Policy learning via doubly robust empirical welfare maximization over trees. E Sverdrup, A Kanodia, Z Zhou, S Athey, S Wager, Journal of Open Source Software. 5502232Sverdrup, E., A. Kanodia, Z. Zhou, S. Athey, and S. Wager (2020). policytree: Policy learning via doubly robust empirical welfare maximization over trees. Journal of Open Source Software 5 (50), 2232. Batch learning from logged bandit feedback through counterfactual risk minimization. A Swaminathan, T Joachims, Journal of Machine Learning Research. 16Swaminathan, A. and T. Joachims (2015). Batch learning from logged bandit feedback through counterfactual risk minimization. Journal of Machine Learning Research 16, 1731-1755. M J Wainwright, High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University PressWainwright, M. J. (2019). High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press. Simultaneous Confidence Bands in Regression Analysis. H P Wynn, P Bloomfield, Journal of the Royal Statistical Society Series B. 332Wynn, H. P. and P. Bloomfield (1971). Simultaneous Confidence Bands in Regression Analysis. Journal of the Royal Statistical Society Series B 33 (2), 202-217. Estimating optimal treatment regimes from a classification perspective. B Zhang, A A Tsiatis, M Davidian, M Zhang, E Laber, Stat. 11Zhang, B., A. A. Tsiatis, M. Davidian, M. Zhang, and E. Laber (2012). Estimating optimal treatment regimes from a classification perspective. Stat 1 (1), 103-114. Estimating individualized treatment rules using outcome weighted learning. Y Zhao, D Zeng, A J Rush, M R Kosorok, Journal of the American Statistical Association. 
107499Zhao, Y., D. Zeng, A. J. Rush, and M. R. Kosorok (2012). Estimating individualized treatment rules using outcome weighted learning. Journal of the American Statistical Association 107 (499), 1106-1118.
[]
[ "CONNECTEDNESS OF KISIN VARIETIES ASSOCIATED TO ABSOLUTELY IRREDUCIBLE GALOIS REPRESENTATIONS", "CONNECTEDNESS OF KISIN VARIETIES ASSOCIATED TO ABSOLUTELY IRREDUCIBLE GALOIS REPRESENTATIONS" ]
[ "Miaofen Chen ", "Sian Nie " ]
[]
[]
We consider the Kisin variety associated to an n-dimensional absolutely irreducible mod p Galois representation ρ of a p-adic field K and a cocharacter µ. Kisin conjectured that the Kisin variety is connected in this case. We show that Kisin's conjecture holds if K is totally ramified with n = 3 or if µ is of a very particular form. As an application, we also obtain a connectedness result for the deformation ring associated to ρ with given Hodge-Tate weights. We also give counterexamples to show that Kisin's conjecture does not hold in general.
10.1515/crelle-2022-0011
[ "https://arxiv.org/pdf/2007.06861v1.pdf" ]
220,514,486
2007.06861
9a40612393b1eeac9141e3b8f73b260a29368b78
CONNECTEDNESS OF KISIN VARIETIES ASSOCIATED TO ABSOLUTELY IRREDUCIBLE GALOIS REPRESENTATIONS 14 Jul 2020 Miaofen Chen Sian Nie CONNECTEDNESS OF KISIN VARIETIES ASSOCIATED TO ABSOLUTELY IRREDUCIBLE GALOIS REPRESENTATIONS 14 Jul 2020arXiv:2007.06861v1 [math.AG] We consider the Kisin variety associated to a n-dimensional absolutely irreducible mod p Galois representationρ of a p-adic field K and a cocharacter µ. Kisin conjectured that the Kisin variety is connected in this case. We show that Kisin's conjecture holds if K is totally ramfied with n = 3 or µ is of a very particular form. As an application, we also get a connectedness result for the deformation ring associated toρ of given Hodge-Tate weights. We also give counterexamples to show Kisin's conjecture does not hold in general. Introduction Let K be a finite extension of Q p with p > 2. Letρ : Γ K → GL n (F) be a n-dimensional continuous representation of the absolute Galois group Γ K over a finite field F for any n ∈ N. Kisin constructed in [21] a projective scheme C µ (ρ) over F which parametrizes the finite flat group schemes over O K with generic fiberρ satisfying some determinant condition determined by µ. These varieties were later called Kisin varieties by Pappas and Rapoport ([27]). The set of connected components π 0 (C µ (ρ)) of the Kisin variety C µ (ρ) is of particular interest among the other topological invariants. Kisin showed in [21] that π 0 (C µ (ρ)) is in bijection with the set of connected components of the generic fiber of the flat deformation ring ofρ with condition on Hodge-Tate weights related to µ. Moreover, Kisin determined π 0 (C µ (ρ)) when K is totally ramified over Q p withρ two dimensional and gave application to modularity lifting theorem. Kisin also conjectured for π 0 (C µ (ρ)) whenρ is indecomposable ( [21] 2.4.16). When n = 2, the conjecture says precisely that the Kisin variety has at most two connected components and this conjecture has been solved by Gee [11], Hellmann [16], Imai [19] and Kisin [21]. For general n, we know that C µ (ρ) consists at most 1 point if the ramification index e(K/Q p ) < p − 1 by Raynaud's result ( [30], 3.3.2). Besides this result, very little is known about the set of connected components. Recently, Erickson and Levin studied in [8] Harder-Narasimhan theory for Kisin modules and defined a stratification on Kisin varieties by Harder-Narasimhan polygons. In this paper, we want to study the connectedness of Kisin varieties C µ (ρ) for an absolutely irreducible reprensentationρ. In this case, Kisin's conjecture says: Conjecture 1.1 (Kisin). Ifρ is absolutely irreducible, then C µ (ρ) is connected. We will prove the conjecture when µ is of a particular form or n = 3 with K totally ramified. We also give counterexamples to show Kisin's conjecture does not hold in general. We first reformulate Kisin variety in a group theoretic way. Let k be the residue field of K. Let π be a uniformizer of K and let π n := π 1 p n be a compatible system of p n -th root of π for all n ∈ N. Denote K ∞ := ∪ n K(π n ). Thenρ |ΓK ∞ is still absolutely irreducible ([1] 3.4.3). By [33], the absolute Galois group Γ K∞ is canonically isomorphic to the absolute Galois group Γ k((u)) of the field of Laurent series. Hence the F-representations of Γ K∞ can be described in terms ofétale ϕ-modules over k ⊗ Fp F((u)) (cf. [9]), where the Frobenius ϕ acts on k ⊗ Fp F((u)) as identity on F and as p-power map on k((u)). 
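Concretely, under this convention ϕ is the ring endomorphism of k ⊗_{F_p} F((u)) determined by

\varphi(a \otimes 1) = a^p \otimes 1 \quad (a \in k), \qquad \varphi(1 \otimes c) = 1 \otimes c \quad (c \in F), \qquad \varphi(1 \otimes u) = 1 \otimes u^p,

so on simple tensors ϕ(a ⊗ c u^i) = a^p ⊗ c u^{pi}.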
Let (Nρ, Φρ) be the ϕ-module of rank n over k ⊗ Fp F((u)) associated to the Tate-twistρ(−1) |ΓK ∞ . Then (Nρ, Φρ) ≃ ((k ⊗ Fp F((u))) n , bϕ) for some b ∈ G(F((u))) with G = Res k|Fp (GL n ). In the following, we will write C µ (b) for the Kisin variety C µ (ρ). Let T be a maximal torus of G. Let Y + denotes the set of dominant cocharacters of T with respect to a fixed Borel subgroup of G containing T . It's a partially ordered set with Bruhat order denoted by ≤. LetF p be an algebraic closure of the finite field F p and L :=F p ((u)) be the field of Laurent series with coefficient inF p . By Breuil-Kisin classification [20], the Kisin variety can be described as follows: C µ (b)(F p ) = {g ∈ G(L)/G(O L )|g −1 bσ(g) ∈ ∪ ν∈Y + ν≤µ G(O L )u ν G(O L )} where the determinant condition µ can be interpreted as a dominant cocharacter µ ∈ Y + and σ : G(L) → G(L) is induced from ϕ. So the Kisin variety C µ (b)⊗F p can be seen as a closed subvariety in the affine Grassmannian F G = G(L)/G(O L ) which is an ind-scheme overF p . The isomorphism class of C µ (b) ⊗F p depends on the σ-conjugacy class of b in G(L). The group theoretic definition of Kisin varieties resembles that of affine Deligne-Lusztig varieties which are moduli spaces of p-divisible groups with additional structures. Their only difference is the definition of Frobenius σ. For affine Deligne-Lusztig varieties, it's an isomorphism. But it's no longer the case for Kisin varieties. This change makes Kisin varieties much harder to study compared to affine Deligne-Lusztig varieties. Much is known about the structure of affine Deligne-Lusztig varieties by the study of many people, such as the non-emptyness ( [11], [13], [17], [22], [24], [29]), dimension formula ( [12], [14], [31], [36]), set of connected components ( [4], [5], [6], [18], [25], [32]) and set of irreducible components up to group action ( [15], [26], [34], [35]). One of the powerful tools to study affine Deligne-Lusztig varieties is the semi-module stratification which arises in a group theoretic way. For example, de Jong and Oort used this stratification to show that each connected component of superbasic affine Deligne-Lusztig variety for GL n is irreducible ( [7]). The second author used this stratification to give a proof of a conjecture of Xinwen Zhu and the first author about the irreducible components of affine Deligne-Lusztig varieties ( [26], with another proof using twisted orbital integrals given by Zhou and Zhu [35]). In this paper, we want to apply this tool to the study of Kisin varieties. The reason we restrict ourselves toρ absolutely irreducible case is that we want b to be simple (cf. §3) so that we can use Caruso's classification of simple ϕ-modules ( [3]) to get a good representative b in its σ-conjugacy class (Proposition 3.3). Note that G |Fp ≃ τ ∈Hom(k,Fp) GL n . An element µ ∈ Y + can be written in the form µ = (µ τ ) τ with µ τ = (µ τ,1 , · · · , µ τ,n ) ∈ Z n,+ := {(a 1 , · · · , a n ) ∈ Z n |a 1 ≥ a 2 ≥ · · · ≥ a n }. Theorem (4.1, 4.6). Suppose b is simple and the Kisin variety C µ (b) is nonempty, then C µ (b) is geometrically connected if one of the following two conditions are satisfied: (1) µ = (µ τ ) τ with µ τ,1 ≥ µ τ,2 = µ τ,3 = · · · = µ τ,n for all τ ∈ Hom(k,F p ); (2) K is totally ramified and n = 3 (i.e. G = GL 3 ). The first part of the theorem recovers the main result of [16] when n = 2. We also give counterexamples to show that Kisin's conjecture does not hold in general (cf. §4.3). 
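For later use it may help to recall that when K is totally ramified, so that G = GL_n, the Schubert condition defining C_µ(b) above has a familiar matrix interpretation; this is a standard fact about the Cartan decomposition, recalled here for convenience rather than taken from the results below. Write µ = (µ_1 ≥ · · · ≥ µ_n) and let d_1 ≥ · · · ≥ d_n be the exponents in the Cartan (Smith normal form) decomposition A ∈ GL_n(O_L) u^{(d_1,...,d_n)} GL_n(O_L) of A = g^{-1}bσ(g). Then A lies in ∪_{ν≤µ} GL_n(O_L) u^ν GL_n(O_L) exactly when

\sum_{i=1}^{k} d_i \le \sum_{i=1}^{k} \mu_i \quad (1 \le k < n), \qquad \sum_{i=1}^{n} d_i = \sum_{i=1}^{n} \mu_i .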
As an application of this result, we obtain a connectedness result of deformation ring ofρ with conditions on Hodge-Tate weights. Supposeρ : Γ K → GL n (F) is absolutely irreducible and flat. Let R f l be the flat deformation ring ofρ in the sense of Ramakrishna ([28]). Consider a minuscule cocharacter ν : G m,Qp → (Res K|Qp T n )Q p , where T n is the maximal torus of GL n consisting of diagonal matrices. Write R f l,ν be the quotient of R f l corresponding to deformations of Hodge-Tate weights given by ν. To ν, we can associate a cocharacter µ(ν) : G m,Fp → Res k|Fp (T n )F p . Corollary (5.1). The scheme Spec(R fl,ν [ 1 p ]) is connected if one of the following two conditions holds: (1) µ(ν) = (µ(ν) τ ) τ with µ(ν) τ = (µ(ν) τ,1 , · · · , µ(ν) τ,n ) such that µ(ν) τ,1 ≥ µ(ν) τ,2 = · · · = µ(ν) τ,n for all τ ∈ Hom(k,F p ); (2) K is totally ramified and n = 3. We now a give brief outline of the article. In Section 2, we introduce the semimodule stratification which is the main tool for us to study the geometry of Kisin varieties. In Section 3, by using Caruso's classification of simple ϕ-modules, we get a good representative b in its σ-conjugacy class for simple elements. In section 4, we prove the main theorems about the connectedness of Kisin varieties and also give some counterexamples for Kisin's conjecture. In section 5, we apply the main theorem to get a connectedness result for deformation rings. first author was partially supported by NSFC grant No.11671136 and STCSM grant No.18dz2271000. The second author was partially supported by NSFC (Nos.11922119, 11688101 and 11621061) and QYZDB-SSW-SYS007. Semi-module stratification on Kisin varieties In this section, suppose p is a prime number (we allowed p = 2). Let G be a reductive group over F p . Fix T ⊆ B ⊆ G with T a maximal torus and B a Borel subgroup of G. Let R = (X, Φ, Y, Φ ∨ ) be the associated root datum, where Φ (resp. Φ ∨ ) is the set of roots (resp. coroots) of G; X (resp. Y ) is the character group (resp. cocharacter group) of T , together with a perfect pairing , : X × Y → Z. Denote by Φ + the set of positive roots appearing in B, and by Y + the corresponding set of dominant cocharacters. On Y + we have the Bruhat order denoted by ≤. Let G(L) be the group of L-points of G. We have the Cartan decomposition on G(L): G(L) = µ∈Y + G(O L )u µ G(O L ). It's a stratification in the strict sense, i.e., for any µ ∈ Y + , G(O L )u µ G(O L ) = ν∈Y + ν≤µ G(O L )u ν G(O L ). Fix an automorphism σ 0 of G that fixes T and B. Denote by σ the endomorphism of G(L) induced from ϕ and σ 0 as follows: σ : G(L) σ0 −→ G(L) ϕL −→ G(L). Let F be a finite extension of F p . Let b ∈ G(F((u))) and µ ∈ Y + such that F contains the reflex field of the conjugacy class of µ. Then the associated (closed) Kisin variety is defined over F and withF p -points given by C µ (b)(F p ) = {g ∈ G(L); g −1 bσ(g) ∈ G(O L )u µ G(O L )}/G(O L ). In the literature, Kisin varieties were defined for the groups of the form G = Res k|Fp H and the automorphism σ 0 is induced by the Frobenius relative to k|F p , where k is finite extension of F p and H is a reductive group over k. (cf. [27], [8]). In this case, the Kisin variety has the moduli interpretation that parametries some finite flat models with H-structure of a Galois representation with value in H ( [23]). 2.2. Semi-module stratification of Kisin varieties. In the rest of this section, we are interested in G × FpFp and C µ (b) × FFp but not their rational structure. Let N T ⊆ G be the normalizer of T . 
Denote by W 0 = N T (L)/T (L) the Weyl group, and denote byW = Y ⋊ W 0 = {u τ w; λ ∈ Y, w ∈ W 0 } the Iwahori-Weyl group. There is a natural action ofW on the vector space Y R = Y ⊗ R. Forw = u τ w ∈W we setẇ = u τẇ withẇ ∈ N T (F p ) some/any lift of w. As σ and ϕ G preserve T (L), we still denote by σ and σ 0 the induced endomorphisms on Y respectively. Notice that σ 0 is an automorphism of the root datum R and σ = pσ 0 . Let I ⊆ G(L) be the Iwahori subgroup associated to the fundamental alcove ∆ = {v ∈ Y R ; 0 < α, v < 1, ∀α ∈ Φ + }. Namely, I is the preimage of B − (F p ) under natural map G(O L ) u →0 −→ G(F p ), where B − is the opposite Borel subgroup of B. Denote by I der = [I, I] the derived subgroup of I. For any r ∈ N, denote by T (1 + u r O L ) the image of (1 + u r O L ) n ⊂ G n m (L) via an isomorphism of algebraic groups G n m ≃ T |Fp , where n is the rank of T . It's easy to see T (1 + u r O L ) does not depend on the choice of the isomorphism G n m ≃ T |Fp . Lemma 2.1. Let b =ẇ withw ∈W such that the (unique) fixed point ofwσ lies in ∆. Then the Lang's map Ψ : h → h −1 bσ(h)b −1 restricts to a bijection I der ∼ = I der . Proof. The first condition implies that bσ(I)b −1 ⊆ I and hence bσ(I der )b −1 ⊆ I der . It follows that Ψ(I der ) ⊆ I der . To show Ψ is a bijection, it suffices to show the action h → bσ(h)b −1 is topologically unipotent on I der . Let e ∈ ∆ be the fixed point ofwσ. For r ∈ R >0 we denote by I der,r the Moy-Prasad subgroup generated by T (1 + u ⌈r⌉ O L ) and U α (u k O L ) such that − α, e + k r. It suffices to show bσ(I der,r )b −1 ⊆ I der,pr . Indeed, writew = u τ w with τ ∈ Y and w ∈ W 0 . Then e = wσ(e) + τ = pwσ 0 (e) + τ . One computes that bσ(T (1 + u ⌈r⌉ O L ))b −1 = T (1 + u p⌈r⌉ O L ) ⊆ T (1 + u ⌈pr⌉ O L ); bσ(U α (u k O L ))b −1 = U wσ0(α) (u pk+ wσ0(α),τ O L ) ⊆ I der,pr , where the second inclusion follows from that − wσ 0 (α), e + pk + wσ 0 (α), τ = − wσ 0 (α), pwσ 0 (e) + τ + pk + wσ 0 (α), τ = p(− α, e + k) ≥ pr. This finishes the proof. For λ ∈ Y , let U + λ (resp. U − λ ) be the maximal unipotent subgroup of G generated by U α such that λ α 0 (resp. λ α < 0), where λ α = λ, α , if α ∈ Φ − ; λ, α − 1, if α ∈ Φ + . Note that U + λ and U − λ are opposite to each other. If there is no confusion, we also write U + λ (resp. U − λ ) for U + λ (L) (resp. U − λ (L)). Consider the following semi-module decomposition C µ (b)(F p ) = ⊔ λ∈Y C λ µ (b)(F p ), where each piece C λ µ (b) is locally closed subscheme of C µ (b)× FFp withF p -points C λ µ (b)(F p ) = (Iu λ G(O L )/G(O L )) ∩ C µ (b)(F p ). Proposition 2.2. Let b =ẇ, wherew = u τ w ∈W is as in Lemma 2.1 for some τ ∈ Y and w ∈ W 0 . The following conditions are equivalent: (1) C λ µ (b) is non-empty; (2) u λ ∈ C µ (b)(F p ); (3) λ ♮ := −λ + τ + wσ(λ) ≤ µ. Moreover, under these equivalent conditions we have (a) C λ µ (b) is connected; (b) C µ (b) = {u λ } if and only if I ∩ U + λ ∩ u λ G(O L )u µ G(O L )u −λ † = u λ U + λ (O L )u −λ , where λ † = τ + wσ(λ); (c) C λ µ (b) is irreducible of dimension |R(λ)| if µ is minuscule, where R(λ) = {α ∈ Φ; λ α 1, α, λ ♮ = −1}. Proof. We first show the three conditions are equivalent. Note that u −λ bσ(u λ ) = u λ ♮ . It follows the implications (3) ⇒ (2) ⇒ (1). It remains to show the implication (1) ⇒ (3). Let ι : I der → I der be the endomorphism of I der given by h → bσ(h)b −1 . Let Ψ : I der → I der be the Lang's map given by h → h −1 ι(h). By Lemma 2.1, Ψ is an isomorphism whose inverse is given by h → · · · ι 2 (h) −1 ι(h) −1 h −1 . 
Denote by π λ : I der → Iu λ G(O L )/G(O L ) the surjective map given by h → hu λ G(O L ) . Then one computes that π −1 λ (C λ µ (b)(F p )) = Ψ −1 (I der ∩ u λ G(O L )u µ G(O L )u −λ † ), with λ † = τ + wσ(λ). By the Iwasawa decomposition, we have I der = (I ∩ U − λ )T (1 + uO L )(I ∩ U + λ ). Noticing that u −λ (I ∩ U − λ )T (1 + uO L )u λ ⊆ I, we deduce that I der ∩ u λ G(O L )u µ G(O L )u −λ † (2.2.1) = (I ∩ U − λ )T (1 + uO L ) ((I ∩ U + λ ) ∩ u λ G(O L )u µ G(O L )u −λ † ). Suppose C λ µ (b)(F p ) is non-empty, that is, I der ∩ u λ G(O L )u µ G(O L )u −λ † = ∅. Thus ∅ = (u −λ (I ∩ U + λ )u λ † ) ∩ G(O L )u µ G(O L ) ⊆ U + λ u λ ♮ ∩ G(O L )u µ G(O L ), which means that λ ♮ ≤ µ and the implication (1) ⇒ (3) is proved. Now suppose that the three equivalent conditions are satisfied. Choose n ≫ 0 such that both I der ∩u λ G(O L )u µ G(O L )u −λ † and I der ∩u λ G(O L )u −λ are invariant under the left multiplication by I der,n , see the proof of Lemma 2.1. Then Ψ induces an automorphism Ψ n : I der /I der,n ∼ → I der /I der,n . Now we have C λ µ (b)(F p ) = π λ • Ψ −1 (I der ∩ u λ G(O L )u µ G(O L )u −λ † ) ∼ = Ψ −1 (I der ∩ u λ G(O L )u µ G(O L )u −λ † )/I der ∩ u λ G(O L )u −λ ∼ = Ψ −1 n (I der ∩ u λ G(O L )u µ G(O L )u −λ † /I der,n )/((I der ∩ u λ G(O L )u −λ )/I der,n ). As Ψ n is an isomorphism, to show C λ µ (b) is connected, it suffices to show I der ∩ u λ G(O L )u µ G(O L )u −λ † /I der,n is connected. In view of (2.2.1) and that ( I ∩ U − λ )T (1 + uO L ) is connected, this is equivalent to show (I ∩ U + λ ) ∩ u λ G(O L )u µ G(O L )u −λ † /I der,n ∩ U + λ is connected. Let ξ be a regular dominant cocharacter with respect to U + λ . Then the map z → z ξ hz −ξ defines an affine line connecting an arbitrary point h ∈ (I ∩ U + λ ) ∩ u λ G(O L )u µ G(O L )u −λ † /I der, n ∩ U + λ and the identity 1. This finishes the proof of (a). One computes that dim C λ µ (b) = dim(I der ∩ u λ G(O L )u µ G(O L )u −λ † )/I der,n − dim(I der ∩ u λ G(O L )u −λ )/I der,n = dim(I ∩ U − λ )T (1 + uO L )((I ∩ U + λ ) ∩ u λ G(O L )u µ G(O L )u −λ † )/I der,n − dim(I ∩ U − λ )T (1 + uO L )(u λ U + λ (O L )u −λ )/I der,n = dim(I ∩ U + λ ) ∩ u λ G(O L )u µ G(O L )u −λ † )/I der,n ∩ U + λ − dim(u λ U + λ (O L )u −λ )/I der,n ∩ U + λ , where the second equality follows from (2.2.1) and I der ∩ u λ G(O L )u −λ = (I ∩ U − λ )T (1 + uO L )(u λ U + λ (O L )u −λ ). By (a), C λ µ (b) = {u λ } if and only if dim C λ µ (b) = 0, that is, I ∩ U + λ ∩ u λ G(O L )u µ G(O L )u −λ † = u λ U + λ (O L )u −λ since u λ U + λ (O L )u −λ ⊆ I ∩ U + λ ∩ u λ G(O L )u µ G(O L )u −λ † with both sides connected and are invariant under left multiplication by u λ U + λ (O L )u −λ . Thus (b) is proved. Suppose µ is minuscule, then λ ♮ is conjugate to µ by (3). In particular, U + λ u λ ♮ ∩G(O L )u µ G(O L ) = U + λ (O L )u λ ♮ U + λ (O L ) = U + λ (O L )( α∈D U α (t −1 O L ))u λ ♮ , where D = {α ∈ Φ; λ α 0, α, λ ♮ = −1}. As λ ♮ is minuscule, all the root subgroups U α for α ∈ D commute with each other. 
Therefore, (I ∩ U + λ ) ∩ u λ G(O L )u µ G(O L )u −λ † (*) = (I ∩ U + λ ) ∩ u λ G(O L )u µ G(O L )u −λ † = u λ (u −λ (I ∩ U + λ )u λ u λ ♮ ∩ G(O L )u µ G(O L ))u −λ † = u λ ((u −λ (I ∩ U + λ )u λ u λ ♮ ) ∩ (U + λ (O L )( α∈D U α (t −1 O L ))u λ ♮ ))u −λ † = I ∩ ((u λ U + λ (O L )u −λ ) α∈D U α (u α,λ −1 O L ) = (u λ U + λ (O L )u −λ )(I ∩ ( α∈D U α (u α,λ −1 O L )) = (u λ U + λ (O L )u −λ ) α∈D I ∩ U α (u α,λ −1 O L ) = (u λ U + λ (O L )u −λ ) α∈D\R(λ) U α (u α,λ O L ) α∈R(λ) U α (u α,λ −1 O L ) = (u λ U + λ (O L )u −λ ) α∈R(λ) U α (u α,λ −1 F p ), where the fifth equality follows from that u λ U + λ (O L )u −λ ⊆ I; the sixth one follows from that I ∩ U + λ = α∈Φ,λα 0 (I ∩ U α ); the last but one equality follows from that I ∩ U α (u λ,α −1 O L ) = U α (u λ,α −1 O L ), if λ α 1; U α (u λ,α O L ), if λ α = 0. In view of 2.2.1 and (*), I der ∩ u λ G(O L )u µ G(O L )u −λ † /I der,n is irreducible. Hence C λ µ (b) is irreducible with dimension dim C λ µ (b) = dim(I ∩ U + λ ) ∩ u λ G(O L )u µ G(O L )u −λ † )/I der,n ∩ U + λ − dim(u λ U + λ (O L )u −λ )/I der,n ∩ U + λ = |R(λ)|. This finishes the proof of (c). 2.3. Multi-copy case. Let G d be the product of d copies of G. For b ∈ G(L) and µ • ∈ Y d we define C µ• (b) overF p withF p -points: C µ• (b)(F p ) = {g • ∈ G d (L); g −1 • b • σ • (g • ) ∈ G d (O L )u µ• G d (O L )}/G d (O L ), where b • = (1, . . . , 1, b) ∈ G d (L) and σ • : G d (L) → G d (L) is the endomorphism given by (g 1 , g 2 , . . . , g d ) → (g 2 , . . . , g d , σ(g 1 )). We still denote by σ • the induced linear map Y d R → Y d R given by (v 1 , v 2 , . . . , v d ) → (v 2 , . . . , v d , σ(v 1 )).(b)(F p ) ։ C µ (b)(F p ). Proof. Note that G(O L )u µ1 G(O L ) · G(O L )u µ2 G(O L ) · · · G(O L )u µ d G(O L ) = G(O L )u µ G(O L ) by [2] 4.4.4. The lemma follows by direct computation. Suppose b =ẇ for somew = u τ w ∈W . Setw • = (1, . . . , 1,w) = u τ• w • with τ • = (0, . . . , 0, τ ) and w • = (1, . . . , 1, w). Let λ • ∈ Y d . Define λ ♮ • = −λ • + τ • + w • σ • (λ • ); R(λ • ) = d i=1 {α ∈ Φ; (λ i ) α 1, α, (λ ♮ • ) i = −1}. Moreover, we set C λ• µ• (b • )(F p ) = I d u λ• G d (O L )/G d (O L ) ∩ C µ• (b)(F p ). Proposition 2.4. Suppose µ • is minuscule and b is as in Lemma 2.1. Then C λ• µ• (b) = ∅ if and only if λ ♮ • is conjugate to µ • . Moreover, in this case, C λ• µ• (b • ) is irreducible of dimension |R(λ • )|. Proof. By Lemma 2.1, the endomorphism of I d der given by g • → b • σ • (g • )b • −1 is topologically unipotent. Hence the Lang's map I d der → I d der given by g • → g −1 • b • σ • (g • )b • −1 is a bijection. Moreover, the fixed point ofw • σ • lies in the fun- damental alcove of Y d R . Then the statement follows the same way as Proposition 2.2 (c). Classification of simple ϕ-modules Suppose G = Res k|Fp H with k = F q and H is a reductive group over k, equipped with σ 0 induced from the Frobenius relative to k|F p . We say b ∈ G(L) is simple if b / ∈ P (L) for any σ-stable proper parabolic subgroup P of G up to σ-conjugation. It's equivalent to say the ϕ-module ((k ⊗ FpFp ((u))) n , bϕ) over k ⊗F p ((u)) is irreducible, where ϕ is as in the introduction. Let B(G) be the set of σ-conjugacy classes in G(L). A σ-conjugacy class [b] ∈ B(G) is called simple if all/some representative in this σ-conjugacy class is simple. Let B(G) s be the subset of B(G) consisting of simple elements. Notice that C µ (b) × FFp depends on the [b] ∈ B(G). The goal of this section is to give a good representative of b in its σ-conjugacy class. 
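The remark that C_µ(b) ×_F F̄_p depends only on the class [b] ∈ B(G) can be verified directly; the short check below is an elementary addition for completeness and is not part of the original text. If b′ = g^{−1} b σ(g) for some g ∈ G(L), then for every h ∈ G(L) one has (gh)^{−1} b σ(gh) = h^{−1} (g^{−1} b σ(g)) σ(h) = h^{−1} b′ σ(h), so left multiplication by g, that is h G(O_L) ↦ g h G(O_L), gives a bijection C_µ(b′)(F̄_p) ≅ C_µ(b)(F̄_p), and in fact an isomorphism of the corresponding varieties over F̄_p.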
There is a natural identification G(L) = f i=1 H(L) with f = [k : F p ]. Under this identification, we have σ : G(L) −→ G(L) (x 1 , · · · , x f ) −→ (ϕ L (x 2 ), · · · , ϕ L (x f ), ϕ L (x 1 )Y ≃ f i=1 Y H and X ≃ f i=1 X H . The endomorphism σ : Y → Y is given by (h 1 , h 2 , . . . , h f ) → (ph 2 , . . . , ph f , ph 1 ). Using the Frobenius relative toF p |k instead of the Frobenius relative tō F p |F p , we define similarly σ H : H(L) → H(L) and simple elements in H(L). We also define B(H) and B(H) s in the same way. We can easily check the following lemma. Lemma 3.1. There is a Shapiro bijection B(H) ≃ B(G) [b ′ ] → [b], where b = (b ′ , 1, · · · , 1) ∈ G(L) = f i=1 H(L). Moreover, it induces a bijection on the subset of simple elements B(H) s ≃ B(G) s . From now on, suppose H = GL n . Let T H (resp. B H ) be the subgroup of diagonal (resp. upper triangular) matrices. We have the following identifications: • X H = ⊕ n i=1 Ze i ≃ Z n , Y H = ⊕ n i=1 Ze ∨ i ≃ Z n with the pairing X H ×Y H → Z induced by e i , e ∨ j = δ i,j ; • Φ = ⊔ f i=1 Φ H , where Φ n = {α i,j = e i − e j ; 1 i = j n}; • Φ + = ⊔ f i=1 Φ + H , where Φ + H = {α i,j ; 1 i < j n}; • W 0 = W f 0,H with W 0,H = S n ; • ∆ = ∆ f H , with ∆ H = {(x 1 , · · · , x n ) ∈ R n |x 1 > x 2 > · · · > x n > x 1 − 1}. We will need the following result of Caruso. Proposition 3.2 (Caruso). Suppose [b] ∈ B(H) is simple. Then there exists a representative b = au τ ′ w ′ ∈ [b] where (1) a ∈ H(F p ) is central,(2) w ′ ∈ W 0,H = S n is the cyclic permutation given by i → i + 1 mod n for 1 ≤ i ≤ n; (3) τ ′ = (m, 0, · · · , 0) ∈ Y H = Z n such that m(q n ′ −1) q n −1 / ∈ Z for any n ′ |n with 1 ≤ n ′ < n. Proof. By [3, Corollaire 8], we may assume b = a ′ u τ ′ w ′ where w ′ as in (2), a ′ = diag(a ′ 1 , 1, · · · , 1) with a ′ 1 ∈F * p and τ ′ = (m, 0, · · · , 0). Let a = diag(c, · · · , c) with c ∈F * p such that c n = a ′ 1 . It's easy to check g −1 bσ H (g) = au τ ′ w ′ , where g = diag(c n−1 , · · · , c, 1). So (1) is also satisfied. The condition (3) follows from [3, Proposition 3] as a is central. Proposition 3.3. If b ∈ G(L) is simple, then it is σ-conjugate to some liftẇ ofw ∈W such that the fixed pointwσ lies in ∆ ∩ (R \ Z) f n ⊆ R f n = Y R . Proof. By Lemma 3.1, any simple element in B(G) is of the form [b], where b = (b ′ , 1, · · · , 1) and b ′ is a lift ofw ′ = u τ ′ w ′ , where τ ′ = (m, 0, . . . , 0) ∈ Y H = Z n and w ′ ∈ W 0,H = S n as in Proposition 3.2. Then b is a lift ofw = u τ w where τ = (τ ′ , 0, · · · , 0) ∈ Y and w = (w ′ , 1, · · · , 1) ∈ W . Let e (resp. e ′ ) be the fixed point ofwσ on Y R = R f n (resp.w ′ σ H on Y H,R = R n ). Claim: e ∈ (R\Z) f n . Moreover, e lies in some alcove ∆ 1 in Y R . By the description of b ′ , we can compute that e ′ = −( m p n − 1 , mp p n − 1 , . . . , mp n−1 p n − 1 ). Moreover mp i p n − 1 − mp j p n − 1 / ∈ Z for 0 i < j n − 1 by the condition (3) in Proposition 3.2 combined with the easy fact that gcd(q a1 − 1, q a2 − 1) = q gcd(a1,a2) − 1 for any positive integers a 1 and a 2 . In particular, this implies mp i−1 p n −1 / ∈ Z (for 1 i n) and hence e ′ ∈ (R \ Z) n . Note that the action of (bσ) f on G(L) = This implies the Claim. Let z ∈W such that ∆ 1 = z(∆) and letw = z −1w σ(z). Then the fixed point z −1 (e ′ ) ofwσ lies in ∆ as desired. Main results In this section, let G = Res k|Fp GL n . Then GF p ≃ (GL n ) f with f = [k : F p ]. We will prove two connectedness results for the Kisin variety C µ (b). 4.1. The first result is the following for µ of particular form. 
In order to prove this theorem, we need a connectedness result for Kisin varieties in the multi-copy case (cf. §2.3). For any positive integer d, consider the group G d . Let N = df . We fix two identifications: Theorem 4.1. Suppose b ∈ G(L) is simple and µ = (µ 1 , · · · , µ f ) ∈ (Z n ) f with µ i = (µ i,1 , · · · , µ i,n ) satisfying µ i,1 ≥ µ i,2 = · · · = µ i,n for all 1 ≤ i ≤ f . Then C µ (b) is geometrically connected if it is nonempty.Y d ≃ (Z n ) N v • = (v 1 , . . . , v d ) → (v 1 , · · · , v N ) where v i+(j−1)d = v i,j for any 1 ≤ i ≤ d and 1 ≤ j ≤ f with v i = (v i,1 , . . . , v i,f ) ∈ Y = (R n ) f and W d 0 ≃ (S n ) N w • = (w 1 , . . . , w d ) → (w 1 , · · · , w N ) where w i+(j−1)d = w i,j with w i = (w i,1 , . . . , w i,f ) ∈ W 0 = S f n . For v • ∈ Y d and w • ∈ W d 0 , we also define v N +1 := v 1 and w N +1 := w 1 . Let ω ∨ 1 = (1, 0, . . . , 0) ∈ Z n . Let µ • ∈ Y d = (Z n ) N such that µ k = m k ω ∨ 1 with m k ∈ {0, 1} for 1 k N . Let b ∈ G(L) be as in Lemma 2.1. We consider the variety C µ• (b). λ • ∈ Y d such that dim C λ• µ• (b) = 0. Moreover, C µ• (b • ) is connected. We first use this result to prove Theorem 4.1 Proof of Theorem 4.1. If χ ∈ Y is a central, then we have the identification C µ (b) = C µ+χ (u χ b). By replacing µ and b with µ + χ and u χ b respectively for some suitable central cocharacter χ, we may assume that µ = (m 1 ω ∨ 1 , . . . , m f ω ∨ 1 ) with m k ∈ Z 0 for 1 k f . By Proposition 3.3, we may assume further that b satisfies the condition in Lemma 2.1. The theorem then follows from Theorem 4.3 and Lemma 2.3. Now it remains to prove Theorem 4.3. We need some combinatorial preparations. For any v = (v(1), · · · , v(n)) ∈ R n , let • [v] = {v(k); 1 k n} ⊆ R, • v = v(1) + · · · + v(n) ∈ R, • δ(v) = v − n min[v] ∈ R, • h(v) = n i=1 ⌊v(i) − min[v]⌋ ∈ Z 0 . Note that h(v) ≤ δ(v) < h(v) + n − 1. Moreover, h(v) = 0 if and only if max[v] − min[v] < 1. Define ς(v) ∈ R n such that ς(v)(i) = v(i) − 1, if v(i) = max[v] v(i), otherwise. For l ∈ Z >0 , set ς l (v) = ς • · · · • ς(v) , where ς appears l times. i < j n. Let v, v ′ ∈ Z n − e such that v = v ′ . (1) δ(ς(v)) = δ(v) − 1 and h(ς(v)) = h(v) − 1 if h(v) 1; (2) h(ς(v)) = 0 if h(v) = 0; (3) v = v ′ if h(v) = h(v ′ ) = 0; (4) δ(v) δ(v ′ ) if and only if h(v) h(v ′ ); (5) δ(ς(v)) δ(ς(v ′ )) if δ(v) δ(v ′ ); Proof. Notice that [v] consists of exactly n elements. Then statements (1) and (2) follow by definition. (3). Suppose h(v) = h(v ′ ) = 0, that is, max[v]−min[v], max[v ′ ]−min[v ′ ] < 1. Note that v − v ′ ∈ Z n and v − v ′ = 0. If v − v ′ = 0, then there exist 1 i = j n such that v(i) − v ′ (i), v ′ (j) − v(j) ∈ Z 1 . Thus v(i) − v(j) = (v(i) − v ′ (i)) + (v ′ (i) − v ′ (j)) + (v(j) ′ − v(j)) > 1 − 1 + 1 = 1, which is a contradiction. (4). Suppose h(v) h(v ′ ). By (1) we may replace v, v ′ with ς h(v) (v), ς h(v) (v ′ ) respectively so that h(v) = 0. In particular, δ(v) < n − 1. To show δ(v) δ(v ′ ) we may assume h(v ′ ) δ(v ′ ) < n − 1. By (1) and (2) we have h(ς h(v ′ ) (v)) = h(ς h(v ′ ) (v ′ )) = 0, which means ς h(v ′ ) (v) = ς h(v ′ ) (v ′ ) by (3). Write [ς h(v ′ ) (v)] = [ς h(v ′ ) (v ′ )] = {x 1 , . . . , x n }, where x 1 < x 2 < · · · < x n ∈ R. As h(v) = 0, we deduce that [v] = {x h(v ′ )+1 , x h(v ′ )+2 , . . . , x n , x 1 + 1, x 2 + 1, . . . , x h(v ′ ) + 1}, which means that δ(v) = ς h(v ′ ) (v ′ ) − nx h(v ′ )+1 + h(v ′ ) ς h(v ′ ) (v ′ ) − nx 1 + h(v ′ ) = δ(v ′ ), where the equality holds if and only if h(v ′ ) = 0 = h(v). This finishes the proof of (4). 
(1) and (2), which means δ(ς(v)) δ(ς(v ′ )) by (4). (5). If δ(v) δ(v ′ ), then h(v) h(v ′ ) by (4). So h(ς(v)) h(ς(v ′ )) byLet σ • : G d (L) → G d (L) be the endomorphism defined in §2.3. Then the induced linear map σ • : (R n ) N = Y d R → Y d R = (R n ) N is given by (v 1 , v 2 , . . . , v N ) → (ǫ 1 v 2 , . . . ǫ N −1 v N , ǫ N v 1 ), where ǫ k = p if k ∈ dZ and ǫ k = 1 otherwise. Lemma 4.5. Let b and µ • as in Theorem 4.3. Let λ • , η • ∈ Y d such that C λ• µ• (b) and C η• µ• (b) are both non-empty. Then λ k = η k for any 1 ≤ k ≤ N . Proof. By Proposition 2.4, −λ • + τ • + w • σ • (λ • ) and −η • + τ • + w • σ • (η • ) are both conjugate to µ • . In particular, for any 1 ≤ k ≤ N , λ k − η k = ǫ k ( λ k+1 − η k+1 ). Therefore λ k − η k = ( N i=1 ǫ i )( λ k − η k ) = p f ( λ k − η k ), and the result follows. Proof of Theorem 4.3. We first show each connected component C contains some C λ• µ• (b) with dim C λ• µ (b) = 0. Indeed, let λ • ∈ Y d be such that C λ• µ• (b) ∩ C is non-empty and d λ• := dim I d u λ• G d (O L )/G d (O L ) is as small as possible. If dim C λ• µ• (b) > 0, then C λ• µ• (b) is irreducible of dimension 1 by Proposi- tion 2.4. So C λ• µ• (b) is a closed (non-projective) subvariety of the affine space I d u λ• G d (O L )/G d (O L ), whose closure C λ• µ• (b) must intersect I d u χ• G d (O L )/G d (O L ) for some χ • = λ • . So C χ• µ• (b • ) = ∅ and d χ• < d λ• since I d u χ• G d (O L )/G d (O L ) ⊆ I d u λ• G d (O L )/G d (O L ). This contradicts the choice of λ • . Therefore, dim C λ• µ• (b) = 0. It remains to show the uniqueness of λ • . We need two Claims. By assumption, b =ẇ for somew = u τ w ∈W such that the fixed point e ∈ Y R ofwσ lies in the fundamental alcove ∆. Set e • = (e, . . . , e) ∈ (Y R ) d and w • = (1, . . . , 1,w) = u τ• w • with τ • = (0, . . . , 0, τ ) and w • = (1, . . . , 1, w). Let λ • = λ • − e • ∈ (R n ) N . Claim 1 :λ k = ǫ k w k (λ k+1 ), if m k = 0; ς(ǫ k w k (λ k+1 )), if m k = 1. Claim 2 : there exists 1 ≤ k 0 ≤ N such that h(λ k0 ) = 0. Assume these two Claims for the moment. Suppose there exists another cocharacter η • ∈ Y d such that dim C η• µ• (b) = 0. Letη • = η • − e • . Then λ k = η k for any k by Lemma 4.5. We may assume δ(λ 1 ) δ(η 1 ), and it follows from Claim 1, Lemma 4.4 (4) and (5) that δ(λ k ) δ(η k ) and h(λ k ) h(η k ) for any k. By Claim 2, there exists k 0 such that h(η k0 ) = 0 and hence h(η k0 ) = 0 = h(λ k0 ). By Lemma 4.4 (3), we haveλ k0 =η k0 , which implies by Claim 1 thatλ k =η k for any k as desired. This proved the uniqueness of λ • . Now we prove Claim 1 and 2. For Claim 1, as e • =w • σ • (e • ) = τ • +w • σ • (e • ), we have λ ♮ • = −λ • + w • σ • (λ • ). Therefore we have (λ ♮ ) k = −λ k + ǫ k w k (λ k+1 ) . As µ • is minuscule, it follows from Proposition 2.4 that λ ♮ • is conjugate to µ • . It's equivalent to say (λ ♮ ) k and µ k = m k ω ∨ 1 are conjugate under S n . If m k = 0, Claim 1 follows. Now assume m k = 1, then (λ ♮ ) k and ω ∨ 1 are conjugate. So there exists 1 i 0 n such that (λ ♮ ) k (i) = 1 if i = i 0 and (λ ♮ ) k (i) = 0 otherwise. Equivalently, λ k (i) = ǫ k w k (λ k+1 )(i) if i = i 0 ǫ k w k (λ k+1 )(i) − 1 if i = i 0 . Let x = (w k (λ k+1 ))(i 0 ) ∈ [λ k+1 ]. Notice that [λ k ] consists of exactly n elements. In order to prove Claim 1, it suffices to show that x = max[λ k+1 ]. If it's not the case, then (w k (λ k+1 ))( i 1 ) = max [λ k+1 ] > x for some 1 i 1 = i 0 n. Thusλ k (i 1 ) = ǫ k max[λ k+1 ] andλ k (i 0 ) = ǫ k x − 1. Let α = α i1,i0 ∈ Φ. 
We will show α ∈ R(λ • ) which contradicts the fact that dim C λ• µ• (b) = |R(λ • )| = 0 by Proposition 2.4. Indeed, we have α, (λ ♮ ) k = (λ ♮ ) k (i 1 ) − (λ ♮ ) k (i 0 ) = −1, and moreover, α, λ k = α,λ k + α, e k = ǫ k max[λ k+1 ] − (ǫ k x − 1) + α, e k > 1 + α, e k . As e k lies in the fundamental alcove, we have −1 < α, e k < 1. Moreover α, λ k 2 if α > 0 and α, λ k 1 if α < 0. So (λ k ) α 1 and hence α ∈ R(λ • ). This finishes the proof of Claim 1. For Claim 2, suppose that h(λ k+1 ) 1 for any k. It follows from Claim 1 that min[λ k ] = ǫ k min[λ k+1 ] for any k. In particular, min[λ k ] = ( N l=1 ǫ l ) min[λ k ] = p f min[λ k ], which means that min[λ k ] = 0. This is impossible since b is simple and [e k ]∩Z = ∅ by Proposition 3.3. This proves Claim 2. Proof. By Proposition 3.2 and 3.3, we may assume b = au τ ω satisfies Lemma 2.1 with a ∈ G(F p ) central, τ ∈ Y and w ∈ S 3 of order 3. Obviously C µ (b) = C µ (u τ ω), so we may assume a = 1. Let S := {λ ∈ Y |λ ♮ ≤ µ} where λ ♮ := −λ + τ + wσ(λ) as before. By Proposition 2.2, it suffices to connect the points {u λ ∈ C µ (b)(F p )|λ ∈ S} inside C µ (b). The result follows from the following two claims. Claim 1: Suppose λ ′ = λ − α ∨ with λ, λ ′ ∈ S and α ∨ is a coroot, then u λ and u λ ′ are in the same geometrically connected component of C µ (b). Claim 2: For any different λ, λ ′ ∈ S, there exists a chain of elements λ 0 , · · · λ r ∈ S for some r ∈ N such that λ = λ 0 , λ ′ = λ r and λ i+1 − λ i is a coroot for any 0 ≤ i ≤ r − 1. We first prove Claim 1. Let U α ⊂ G be the root subgroup of G corresponding to α. We fix an isomorphism G a ≃ U α . For anyF p -algebra R, we write U α (x) ∈ U α (R) for the image of x ∈ R via G a ≃ U α . For any x ∈F p , let g(x) := u λ U α (u −1 x). We verify that g(x)K ∈ C µ (b)(F p ). Note that g(x) −1 bσg(x) = U α (−u −1 x)u λ ♮ U wα (u −p α). By a SL 2 -computation, we get U α (u −m x) ∈ U −α (u m x −1 )u −mα ∨ G(O L ) for any positive integer m. So g(x) −1 bσg(x) ∈ G(O L )u µ G(O L ) if λ ♮ − pwα ∨ ≤ µ, λ ♮ + α ∨ ≤ µ and λ ′♮ = λ ♮ + α ∨ − pwα ∨ ≤ µ. The first two conditions are implied by the third one as α, wα ∨ = −1, and the third condition is automatic as λ ′ ∈ S. Therefore, g(x) defines an affine line g : A 1 → C µ (b). As C µ (b) is projective, g extends uniquely to g : P 1 → C µ (b). It's easy to compute g(∞) = u λ ′ . Hence u λ and u λ ′ are in the same connected component. Now it remains to prove Claim 2. Let λ, λ ′ ∈ S. Then λ − λ ′ is in the coroot lattice by Lemma 4.5. Note that α ∨ 1 + wα ∨ 1 + w 2 α ∨ 1 = 0 for any coroot α ∨ 1 . It follows that we can always find a coroot α ∨ 1 such that λ ′ − λ = n 1 α ∨ 1 + n 2 wα ∨ 1 with n 1 = max{|n 1 |, |n 2 |, |n 1 − n 2 |}. In particular n 1 ≥ n 2 ≥ 0. We will prove by induction on n 1 . Let α ∨ i := w i−1 α ∨ 1 for i ∈ N. If n 2 = n 1 or 0, then λ ′ − λ = n 1 α ∨ is a multiple of the coroot α ∨ where α ∨ 1 equals to −α ∨ 3 (resp. α ∨ 1 ) if n 2 = n 1 (resp. n 2 = 0). Then λ + nα ∨ ∈ S for 0 ≤ n ≤ n 1 . We are done. Now we may assume n 1 > n 2 > 0. Claim 3: λ + α ∨ 1 ∈ S or λ − α ∨ 3 ∈ S. Suppose Claim 3 holds for the moment. Notice that −α ∨ 3 = α ∨ 1 + α ∨ 2 , we can apply induction hypothesis to the pair (λ − α ∨ 3 , λ ′ ) if λ − α ∨ 3 ∈ S or to the pair (λ + α ∨ 1 , λ ′ ) if λ + α ∨ 1 ∈ S. This proves Claim 1. Now it remains to prove Claim 3. Suppose Claim 3 does not hold. Without loose of generality, we may assume α 1 = (1, −1, 0) and α 2 = (0, 1, −1). 
Then λ ′♮ = λ ♮ − (n 1 + pn 2 )α ∨ 1 + (pn 1 − (p + 1)n 2 )α ∨ 2 = λ ♮ + (−n 1 − pn 2 , (p + 1)n 1 − n 2 , −pn 1 + (p + 1)n 2 ). In particular, (λ + α ∨ 1 ) ♮ = λ ♮ + (−1, p + 1, −p) (λ − α ∨ 3 ) ♮ = λ ♮ + (−1 − p, p, 1). Write λ ♮ = (a 1 , a 2 , a 3 ). As λ ′♮ ≤ µ and λ ♮ ≤ µ, we deduce that (λ + α ∨ 1 ) ♮ µ is equivalent to a 3 −p < min[µ] and (λ−α ∨ 3 ) ♮ µ is equivalent to a 3 +1 > max[µ]. Therefore, max[µ] − min[µ] < 1 + p. On the other hand, the fact that λ ′♮ ≤ µ and λ ♮ ≤ µ implies max[µ] − min[µ] ≥ n 1 + pn 2 ≥ p + 1, which is impossible. C µ (b)(F p ) = C µ (b)(F p ) = {u (2,1,1,0) , u (1,1,1,1) }. (b) Let G = Res k|Fp GL 3 with [k : F p ] = 2. Choose F containing k. Then the group G |F ≃ GL 3 × GL 3 . Let b = (u (2,0,1) (123), u (0,0,1) ) ∈ G(F) and µ = ((p + 1, 0, 0), (p, p, 0)), then C µ (b)(F) = C µ (b)(F p ) = {u χ , u χ ′ }, where χ = ((1, 0, 1), (0, 0, 1)) and χ ′ = ((1, 1, 0), (1, 0, 0)). Proof. For λ ∈ Y we set D(λ) = {α ∈ Φ; λ α 0, α, λ ♮ −1}. Claim: if λ ♮ is conjugate to µ, and λ α = 0 for α ∈ D(λ), then C λ µ (b) = {u λ }. Indeed, as λ ♮ is conjugate to µ, we have U + λ u λ ♮ ∩G(O L )u µ G(O L ) = U + λ (O L )u λ ♮ U + λ (O L ) = U + λ (O L )( α∈D(λ) U α (u α,λ ♮ O L ))u λ ♮ . For α ∈ D we have λ α = 0 by assumption and hence I ∩ U α (u α,λ + α,λ ♮ O L ) = u λ U α (O L )u −λ . Thus (I ∩ U + λ ) ∩ u λ G(O L )u µ G(O L )u −λ † = u λ U + λ (O L )u −λ α∈D(λ) I ∩ U α (u α,λ + α,λ ♮ O L ) = u λ U + λ (O L )u −λ , which means C λ µ (b) = {u λ } by Proposition 2.2 (b). This concludes the Claim. In the case (a), one checks directly that C λ µ (b) = ∅, or equivalently λ ♮ ≤ µ, if and only if λ = (1, 1, 1, 1) or λ = (2, 1, 1, 0). In the former case, we have Application to deformation spaces In this section, we assume p > 2. Supposeρ : Γ K → GL n (F) is absolutely irreducible and flat. Here flat means thatρ comes from a finite flat group scheme over O K . Let R f l be the flat deformation ring ofρ in the sense of Ramakrishna ([28]). It's a complete local noetherian W (F)-algebra. We consider the conjugacy class {ν} of a minuscule cocharacter ν : G m,Qp → (Res K|Qp GL n )Q p . Let T n be the maximal torus of GL n consisting of diagonal matrices. We may assume that ν is dominant and has image in Res K|Qp T n,Qp . Write ν = (ν δ ) δ∈Hom(K,Qp) with ν δ = (1, · · · , 1 v δ , 0, · · · , 0) ∈ X * (T n ) = Z n . We deonote by R f l,ν for the quotient of R f l corresponding to the deformations ξ : G K → GL n (O E ) with E an extension of W (F) [ 1 p ] that contains the reflex field of {ν}, such that Hodge-Tate weights given by ν, i.e., for any a ∈ K, det E (a|D cris (ξ) K /Fil 0 D cris (ξ) K ) = δ∈Hom(K,Qp) δ(a) v δ . The cocharcter ν comes from a cocharacter overZ p that we still denote by ν: ν : G m,Zp → (Res K|Zp T n )Z p . Therefore, we obtain a cocharacter µ(ν) : G m,Fp ν×Z pF p −→ (Res OK |Zp (T n )) ×Z pF p → Res k|Fp (T n )F p , where the second arrow is the natural map induced from O K ⊗ ZpFp → k ⊗ FpFp . More concretely, write µ(ν) = (µ(ν) τ ) τ ∈Hom(k,Fp) with µ(ν) τ ∈ X * (T n ) = Z n . Then for any τ ∈ Hom(k,F p ), µ(ν) τ = δ∈Hom(K,Qp ) δ=τ ν δ , whereδ is the embedding of the residue fields induced from δ. (1) µ(ν) = (µ(ν) τ ) τ with µ(ν) τ = (µ(ν) τ,1 , · · · , µ(ν) τ,n ) such that µ(ν) τ,1 ≥ µ(ν) τ,2 = · · · = µ(ν) τ,n for all τ ∈ Hom(k,F p ); (2) K is totally ramified and n = 3. Proof. Suppose the restriction to G K∞ of the Tate-Twistρ(−1) corresponding to an absolutely simple ϕ-module ((k ⊗ Fp F) n , bϕ) of rank n over k ⊗ Fp F, where b ∈ Res k|Fp GL n (F). 
By [21] 2.4.10, the connected components of Spec(R fl,ν [ 1 p ]) is in bijection with C µ(ν) (b). Then the result follows from Theorems 4.1 and 4.6. 2. 1 . 1Kisin variety for arbitary reductive groups. Recall that L =F p ((u)) and O L =F p [[u]]. Let ϕ L : L → L be the homomorphism given by i a i u i → i a i u pi , where a i ∈F p . Lemma 2. 3 . 3Let µ • = (µ 1 , . . . , µ d ) be a dominant cocharacter and let µ = µ 1 + · · · + µ d . Then the projection G d (L) → G(L) to the first factor induces a surjection C µ• L) stablizes each component of H(L) and its restriction to the first component equals to b ′ σ H . It follows that e = (e ′ , pe ′ , · · · , p f −1 e ′ ). Remark 4 . 2 . 42Theorem 4.1 is proved in [16, Theorem 1,1] for n = 2. Theorem 4. 3 . 3Let µ • and b be as above such that C µ• (b) = ∅. Then there exists a unique Lemma 4. 4 . 4Let e ∈ R n such that e(i) − e(j) / ∈ Z for 1 4. 2 . 2Now we show the second connectedness result for the totally ramified case with n = 3 (i.e. G = GL 3 ). Theorem 4 . 6 . 46Suppose G = GL 3 and b is simple, then C µ (b) is geometrically connected. 4. 3 . 3In general, C µ (b) is not connected. We give two counter-examples. Proposition 4 . 7 . 47(a) Let G = GL 4 , b = u (2,0,2,0) (1243) ∈ GL n (F p ) and µ = (2p − 1, p, p, 1). Then Iu λ G(O L )/G(O L ) = u λ G(O L )/G(O L ) and C λ µ = {u λ } since λ is central. In the latter case, one checks that λ satisfies the conditions of the Claim and it follows that C λ µ (b) = {u λ } as desired. In the case (b), one checks directly that C λ µ (b) = ∅ if and only if λ = χ ′ or λ = χ. If λ = χ ′ , then λ is dominant and minuscule, which means Iu λ G(O L )/G(O L ) = u λ G(O L )/G(O L ) and C λ µ (b) = {u λ }. If λ = χ, then the conditions of the Claim are satisfied and hence C λ µ (b) = {u λ } as desired. Corollary 5. 1 . 1The scheme Spec(R fl,ν [ 1 p ])is connected if one of the following two conditions holds: ) . )For the group H over k, we define T H ⊂ B H maximal torus and Borel subgroup of H such that T = Res k|Fp T H and B = Res k|Fp B H . We also define W 0,H , X H , Y H , ∆ H , Φ H , Φ + H in the same way for H. Then there is natural identifications Acknowledgments. We would like to thank Brandon Levin who encouraged us to work on this topic. We thank Hui Gao for useful discussions. We also thank Mark Kisin and Frank Calegari for their useful comments. The Integral p-adic Hodge Theory, Algebraic Geometry. C Breuil, Adv. Studies in Pure Math. 36C. Breuil, Integral p-adic Hodge Theory, Algebraic Geometry 2000, Azumino, Adv. Studies in Pure Math. 36(2002), 51-80. Groupes réductifs sur un corps local. F Bruhat, J Tits, Inst. HautesÉtudes Sci. Publ. Math. No. 41F. Bruhat and J. Tits, Groupes réductifs sur un corps local, Inst. HautesÉtudes Sci. Publ. Math. No. 41 (1972), 5-251. Sur la classification de quelques φ-modules simples. X Caruso, Mosc. Math. J. 9X. Caruso, Sur la classification de quelques φ-modules simples, Mosc. Math. J. 9 (2009), 562-568. Connected components of affine Deligne-Lusztig varieties in mixed characteristic. M Chen, M Kisin, E Viehmann, Compos. Math. 151M. Chen, M. Kisin and E. Viehmann, Connected components of affine Deligne- Lusztig varieties in mixed characteristic, Compos. Math. 151 (2015), 1697-1762. Connected components of closed affine Deligne-Lusztig varieties. L Chen, S Nie, Math. Ann. 375L. Chen and S. Nie, Connected components of closed affine Deligne-Lusztig vari- eties, Math. Ann. 375 (2019), 1355-1392. Connected components of closed affine Deligne-Lusztig varieties for Res E. 
L Chen, S Nie, F GLn J. Algebra. 546L. Chen and S. Nie, Connected components of closed affine Deligne-Lusztig varieties for Res E/F GLn J. Algebra 546 (2020), 1-26. Purity of the stratification by Newton polygons. A J Jong, F Oort, J. Amer. Math. Soc. 13A. J. de Jong and F. Oort, Purity of the stratification by Newton polygons, J. Amer. Math. Soc., 13 (2000), 209-241. A Harder-Narasimhan theory for Kisin modules. C W Erickson, B Levin, to appear in Alg. GeometryC. W. Erickson and B. Levin, A Harder-Narasimhan theory for Kisin modules, to appear in Alg. Geometry. J.-M Fontaine, Y Ouyang, Theory of p-adic Galois Representations, preprint. J.-M. Fontaine and Y. Ouyang. Theory of p-adic Galois Representations, preprint, http://staff.ustc.edu.cn/ yiouyang/research.html On a conjecture of Kottwitz and Rapoport. Q Gashi, Ann. Sci.École Norm. Sup. 4Q. Gashi, On a conjecture of Kottwitz and Rapoport, Ann. Sci.École Norm. Sup. (4) 43 (2010), 1017-1038. A modularity lifting theorem for weight two Hilbert modular forms. T Gee, Math. Res. Lett. 13T. Gee, A modularity lifting theorem for weight two Hilbert modular forms, Math. Res. Lett. 13 (2206), 805-811. Dimensions of some affine Deligne-Lusztig varieties. U Görtz, T Haines, R Kottwitz, D Reuman, Ann. Sci. Ecole Norm. Sup. 39U. Görtz, T. Haines, R. Kottwitz and D. Reuman, Dimensions of some affine Deligne-Lusztig varieties, Ann. Sci. Ecole Norm. Sup. 39 (2006), 467C511. P-alcoves and nonemptiness of affine Deligne-Lusztig varieties. U Görtz, X He, S Nie, Ann. Sci.École Norm. Sup. 48U. Görtz, X. He and S. Nie, P-alcoves and nonemptiness of affine Deligne-Lusztig varieties, Ann. Sci.École Norm. Sup. 48 (2015), 647-665. The dimension of affine Deligne-Lusztig varieties in the affine Grassmannian. P Hamacher, Int. Math. Res. Not. P. Hamacher, The dimension of affine Deligne-Lusztig varieties in the affine Grass- mannian, Int. Math. Res. Not. 2015 (2015), 12804-12839. Irreducible components of minuscule affine Deligne-Lusztig varieties. P Hamacher, E Viehmann, Algebra Number Theory. 127P. Hamacher and E. Viehmann, Irreducible components of minuscule affine Deligne- Lusztig varieties, Algebra Number Theory 12 (2018), no. 7, 1611-1634. Connectedness of Kisin's varieties for GL 2. E Hellmann, Adv. in Math. 228E. Hellmann, Connectedness of Kisin's varieties for GL 2 , Adv. in Math. 228 (2011), 219-240. Geometric and homological properties of affine Deligne-Lusztig varieties. X He, Ann. of Math. 179X. He, Geometric and homological properties of affine Deligne-Lusztig varieties, Ann. of Math. 179 (2014), 367-404. X He, R Zhou, arXiv:1610.06879On the connected components of affine Deligne-Lusztig varieties. X. He and R. Zhou, On the connected components of affine Deligne-Lusztig vari- eties, arXiv:1610.06879. On the connected components of moduli spaces of finite flat modules. N Imai, Amer. J. Math. 132N. Imai, On the connected components of moduli spaces of finite flat modules, Amer. J. Math. 132 (2010), 1189-1204. M Kisin, Crystalline representations and F -crystals, Algebraic geometry and number theory. 253M. Kisin, Crystalline representations and F -crystals, Algebraic geometry and num- ber theory, 253 (2006), 459-496. Moduli of finite flat group schemes, and modularity. M Kisin, Ann. of Math. 170M. Kisin, Moduli of finite flat group schemes, and modularity, Ann. of Math., 170 (2009), 1085-1180. On the existence of F-crystals. R Kottwitz, M Rapoport, Comment. Math. Helv. 781R. Kottwitz and M. Rapoport, On the existence of F-crystals, Comment. Math. Helv. 
78 (2003), no.1, 153-184. G-valued crystalline representations with minuscule p-adic Hodge type. B Levin, Algebra and Number Theory. 9B. Levin G-valued crystalline representations with minuscule p-adic Hodge type, Algebra and Number Theory 9 (2015), 1741-1792. A converse to Mazurs inequality for split classical groups. C Lucarelli, J. Inst. Math. Jussieu. 32C. Lucarelli, A converse to Mazurs inequality for split classical groups, J. Inst. Math. Jussieu 3 (2004), no. 2, 165-183. Sian Connected components of closed affine Deligne-Lusztig varieties in affine Grassmannians. S Nie, Amer. J. Math. 140S. Nie, Sian Connected components of closed affine Deligne-Lusztig varieties in affine Grassmannians, Amer. J. Math. 140 (2018), 1357-1397. S Nie, arXiv:1809.03683Irreducible components of affine Deligne-Lusztig varieties. S. Nie, Irreducible components of affine Deligne-Lusztig varieties, arXiv:1809.03683. Φ-modules and coefficient spaces. G Pappa, M Rapoport, Moscow Mathematical Journal. 9G. Pappa, and M. Rapoport, Φ-modules and coefficient spaces, Moscow Mathemat- ical Journal, 9(2009), 625-663. On a variation of Mazur's deformation functor. R Ramakrishna, Compositio Math. 87R. Ramakrishna,On a variation of Mazur's deformation functor, Compositio Math. 87 (1993), 269C286. On the classification and specialization of F-isocrystals with additional structure. M Rapoport, M Richartz, Compositio Math. 103M. Rapoport and M. Richartz, On the classification and specialization of F-isocrys- tals with additional structure, Compositio Math. 103 (1996), 153-181. M Raynaud, Schémas en groupes de type. Bull. Soc. Math. France102M. Raynaud,Schémas en groupes de type (p, · · · , p), Bull. Soc. Math. France, 102 (1974), 241-280. The dimension of some affine Deligne-Lusztig varieties. E Viehmann, Ann. Sci. Ecole Norm. Sup. 39E. Viehmann, The dimension of some affine Deligne-Lusztig varieties, Ann. Sci. Ecole Norm. Sup. 39 (2006), 513-526. Connected components of closed affine Deligne-Lusztig varieties. E Viehmann, Math. Ann. 340E. Viehmann, Connected components of closed affine Deligne-Lusztig varieties, Math. Ann. 340 (2008), 315-333. Le corps des normes de certaines extensions infinies de corps locaux; applications. J.-P Wintenberger, Ann. Sci.École Norm. Sup. 16J.-P. Wintenberger, Le corps des normes de certaines extensions infinies de corps locaux; applications, Ann. Sci.École Norm. Sup., 16 (1983), 59-89. L Xiao, X Zhu, arXiv:1707.05700Cycles on Shimura varieties via geometric Satake. L. Xiao and X. Zhu, Cycles on Shimura varieties via geometric Satake, arXiv:1707.05700. ZhuTwisted orbital integrals and irreducible components of affine Deligne-Lusztig varieties. R Zhou, Y , Camb. J. Math. 81R. Zhou and Y. ZhuTwisted orbital integrals and irreducible components of affine Deligne-Lusztig varieties, Camb. J. Math., 8 (2020), No. 1, 149-241. Affine Grassmannians and the geometric Satake in mixed characteristic. X Zhu, Ann. of Math. 185X. Zhu, Affine Grassmannians and the geometric Satake in mixed characteristic, Ann. of Math., 185 (2017), 403-492.
[]
[ "Finite Blocklength Communications in Smart Grids for Dynamic Spectrum Access and Locally Licensed Scenarios", "Finite Blocklength Communications in Smart Grids for Dynamic Spectrum Access and Locally Licensed Scenarios" ]
[ "Iran Ramezanipour ", "Parisa Nouri ", "Hirley Alves ", "Pedro J H Nardelli ", "Richard Demo Souza ", "Ari Pouttu " ]
[]
[]
This work focuses on the performance analysis of short-blocklength communication with application in smart grids. We use stochastic geometry to compute in closed form the success probability of a typical message transmission as a function of its size (i.e., blocklength), the number of information bits and the density of interferers. Two different scenarios are investigated: (i) dynamic spectrum access, where licensed and unlicensed users share the uplink channel frequency band, and (ii) a local licensing approach using the so-called micro-operator, which holds an exclusive license of its own. An approximate outage probability expression is derived for the dynamic spectrum access scenario, while a closed-form solution is attained for the micro-operator. The analysis also incorporates the use of retransmissions when messages are detected in error. Our numerical results show how reliability and delay are related in either scenario.
10.1109/jsen.2018.2835571
[ "https://arxiv.org/pdf/1805.09549v1.pdf" ]
43,940,101
1805.09549
5547869d0380e9e4c6103ba638d0e864bf9393b8
Finite Blocklength Communications in Smart Grids for Dynamic Spectrum Access and Locally Licensed Scenarios Iran Ramezanipour Parisa Nouri Hirley Alves Pedro J H Nardelli Richard Demo Souza Ari Pouttu Finite Blocklength Communications in Smart Grids for Dynamic Spectrum Access and Locally Licensed Scenarios 1Index Terms-Machine-to-machinemicro-operatorshort mes- sagedynamic spectrum accesslocal licensingultra-reliabilityPoisson Point Process This work focuses on the performance analysis of short blocklength communication with application in smart grids. We use stochastic geometry to compute in closed form the success probability of a typical message transmission as a function of its size (i.e. blocklength), the number of information bits and the density of interferers. Two different scenarios are investigated: (i) dynamic spectrum access where the licensed and unlicensed users, share the uplink channel frequency band and (ii) local licensing approach using the so called micro operator, which holds an exclusive license of its own. Approximated outage probability expression is derived for the dynamic spectrum access scenario, while a closed-form solution is attained for the micro-operator. The analysis also incorporates the use of retransmissions when messages are detected in error. Our numerical results show how reliability and delay are related in either scenarios. Finite Blocklength Communications in Smart Grids for Dynamic Spectrum Access and Locally Licensed Scenarios Iran Ramezanipour, Parisa Nouri, Hirley Alves, Pedro J. H. Nardelli, Richard Demo Souza and Ari Pouttu Abstract-This work focuses on the performance analysis of short blocklength communication with application in smart grids. We use stochastic geometry to compute in closed form the success probability of a typical message transmission as a function of its size (i.e. blocklength), the number of information bits and the density of interferers. Two different scenarios are investigated: (i) dynamic spectrum access where the licensed and unlicensed users, share the uplink channel frequency band and (ii) local licensing approach using the so called micro operator, which holds an exclusive license of its own. Approximated outage probability expression is derived for the dynamic spectrum access scenario, while a closed-form solution is attained for the micro-operator. The analysis also incorporates the use of retransmissions when messages are detected in error. Our numerical results show how reliability and delay are related in either scenarios. Index Terms-Machine-to-machine, micro-operator, short message, dynamic spectrum access, local licensing, ultra-reliability, Poisson Point Process I. INTRODUCTION Wireless networks have become an indispensable part of our daily life through a wide range of applications. For instance, we may now remotely monitor and control different processes within our homes, workplaces or even at industrial environments. In the upcoming years the advances in wireless communications shall be even more seamless and will provide connectivity through the so called Internet of Things (IoT) [1]. It is envisioned that by the year 2020, billions of devices (including sensors and actuators) will be connected to the Internet, gathering all kinds of data and generating a huge economic impact [2], [3]. 
One of the key enablers of this future is the so-called machine-type communication (MTC) where a large number of devices will perform sensing, monitoring, actuation and control tasks with minimal or even no human intervention [1]. In other words, MTC -also known as machine-to-machine (M2M) communication -incorporates sensors, appliances and vehicles and this is expected to lead to a decrease of humancentric connections [4]. MTC is also one of the cornerstones of the upcoming 5G communication technologies. As discussed before, it contemplates a massive deployment of devices communicating with diverse range of requirements in terms of reliability, latency and data rates [5]- [8] . For example, [9, Table. 2] lists the main requirements and features for different use cases of MTC over cellular networks. Intelligent transportation systems, industrial automation and smart grids have already been deployed using the IoT concept, which is also one of the main driving technologies of 5G [10], [11]. It should be noted that the mentioned applications have different reliability requirements. For instance, some smart grid applications requires a reliability as low as 10´6, which in their turn is less strict if compared to some industrial automation applications [10]. A. Dynamic Spectrum Access and Locally Licensed in 5G One of the main features of 5G will be its capability of connecting very large number of devices in different locations, while being able to serve case specific needs of different applications. Indoor networks are responsible for the larger part of the mobile traffic, hence, it is essential to build new, more efficient indoor small cell networks. This will require more spectrum, which makes its availability a big challenge to tackle. There are generally two ways to access the available spectrum in a network which are [12]: (i) Individual Authorization (Licensed), and (ii) General Authorization (Unlicensed). There are five different allocations scenarios associated with the previously mentioned access schemes, namely dedicated licensed spectrum, limited spectrum pool, mutual renting, vertical sharing and unlicensed horizontal sharing. Cognitive radio has gained a high popularity during the past few years since it makes a more efficient use of the frequency spectrum possible [13], [14]. Dynamic spectrum access is one of the many interesting aspects of cognitive radios [15] where the unlicensed-users can use the same frequency band as the licensed-users while not affecting their transmission. For that sake, the unlicensed-users evaluate the spectrum usage and then transmit if the channel is free, otherwise, they postpone their transmission or use other frequency bands [16], [14]. Dynamic spectrum access can also be implemented in smart grid communication networks. In [17], [18], the use of dynamic spectrum access in smart grid communication networks is evaluated and its suitability is positively assessed. Another interesting approach towards reaching spectral efficiency is using the non-orthogonal cognitive radio techniques. Despite being relatively new, many valuable research works have been done in this field. In [19], authors develop a cognitive radio scheme for multicarrier wireless sensor networks by studying a dense wireless sensor network model where the sensors can opportunistically use the primary users' spectrum for their transmissions. 
Unlike the previous case, this model does not require the sensor nodes to sense the channel before transmission which is useful in terms of maintaining the limited resources of the sensors. Authors in [20] propose a number of new interference control and power allocation methods for cognitive radios which sheds light on the primary and secondary users' power allocation requirements. An interesting work has been done in [21] where a new spectrum sharing model is proposed for multicarrier cognitive radio systems in which the secondary users can simultaneously use the primary users' frequency band for their transmission while actually improving the primary users' transmission by the mean of convolutive superposition. Moreover, in [22] and [23], two different spectrum access protocols are proposed for the secondary networks via controlled amplify-and-forward relaying and cooperative decodeand-forward relaying. These protocols are proven to not have a negative impact on the rate and outage probability of the primary network. Another interesting work while using amplify-and-forward scheme has also been done in [24] where the proposed model makes it possible for secondary users to use the primary users' frequency channel for their transmissions even when the primary network is active. By using this model, it is possible to improve the secondary user's packet delay and primary users' achievable rate. A useful model for improving the secondary network's achievable rate is also introduced in [25] where the authors achieve this goal by applying superposition coding to a collaborative spectrum sharing scenario. While all the aforementioned spectrum sharing and dynamic spectrum access models shall be a part of the future wireless communications, having exclusively licensed spectrum (locally licensed) is crucial for 5G to be able to meet some Quality of Service (QoS) requirements [12]. For instance, the micro-operator (µO) concept has been introduced as a mean for local service delivery in 5G which will benefit from having exclusively licensed spectrum. µOs make the previously mentioned case-specific services in the future indoor small cell networks possible [26], [12]. Micro-operators have their own specific infrastructure which enables them to handle different kinds of Mobile Network Operator (MNO) users while also collaborating with the network infrastructure vendors, facility users/owners, utility service companies and regulators. µOs shall help the MNOs specially in the areas that the traffic demand is high by offering them indoor capacity. It should be noted that the functionality of a µO depends on the available spectrum resources, which are limited. As mentioned earlier, having exclusive licensed spectrum is very important for the success of 5G so the µOs are the entities in a network that can benefit from it since regulators are able to issue local spectrum licenses for their own usage within a specific location [12], [26]. While we acknowledge all the valuable works mentioned earlier that have been done in the area of spectrum sharing which are also relevant for smart grid applications, it is important to mention that, in this study, our focus is to analyze the performance of the two dynamic spectrum access and locally licensed models. To do so, next we review the reliability requirements for smart grid applications and recent works in the MTC area. B. 
Reliability in Smart Grids and the Role of MTC Communication systems have been traditionally studied using the notion of channel capacity that assumes very large (infinite) blocklength [27]- [29]; this is a reasonable benchmark for practical systems with blocklength in the order of thousands of bits. MTC, however, often uses short messages which is not currently supported by the wireless networks and periodic data traffic, coming from a massive number of devices. The same assumptions in terms of channel capacity cannot be directly applied to short blocklength messages as pointed out in [27], [30]. This imposes the need for a new paradigm on the network design and analysis architecture to support such amount of connected devices with their heterogeneous requirements [3]. New information theoretic results have been presented to evaluate the performance of short blocklength systems, from point-to-point additive white Gaussian noise (AWGN) links up to Multiple-Input Multiple-Output (MIMO) fading channels [30]. In [31], [32], the authors investigate retransmission methods and interference networks in this context. Nevertheless, network level analysis is still missing in the literature, except for [33], where the authors utilize Poisson Point Process (PPP) to characterize the network deployment and interference, considering finite block codes in a cellular network context. Smart grids characterize the modernization process that the power grids undergo and is one important case for MTC [34], [35]. Communication systems are one the most important parts of the smart grids and different communication technologies are currently being used in smart grids most of which use the existing communication technologies such as PLC, fiber optical communications and LTE [36]. However, considering the rapid advancements of the communication systems towards 5G, smart grid communication systems should also be designed in a way that would be compatible with the newest telecommunication technology requirements which would not be really possible by using the traditional communication systems anymore. Therefore, smart grids are an interesting topic to be studied under the umbrella of 5G, especially considering that the requirements imposed by smart grids have hitherto overlooked specially with respect to massive connectivity and ultra reliable low latency communication [11]. Hence, motivated by smart grids stringent requirements [11], [37], here we focus on two different scenarios looking at the ultra-reliability using finite blocklength (short messages) in order to reduce latency and capture practical aspects regarding the message size, which is one of the novel aspects of 5G. The reliability requirement of a smart grid network varies from one application to another [37]. For instance, applications such as remote meter reading have less strict reliability require-ments (98%) while high-voltage grids require high reliability (more than 99.9%) in addition to low latency. Moreover, applications such as teleprotection in smart grid networks also require very high reliable data transmission between the power grid substations within a very short period of time, in the order of few milliseconds. Smart girds also may need to have real time monitoring and control and should be able to react immediately to the changes in the network which means there is going to be a need for ultra reliable communications with 99.999%´99.99999% reliability level and a low latency, around 0.5´8ms [11], [38], [39]. 
In this paper, we show that it is possible to achieve these different and strict requirements with our proposed models using finite blocklength communications which require a completely different design settings compared to what is currently being used in for instance, LTE or WiFi [30] . C. Related Work Ultra reliable communications and finite blocklength have become popular topics and many studies have been done in this field, however, there are still many issues that need to be addressed. For instance, [27], [40], [41], being amongst the first fundamental works in the finite blocklength area, where they set the foundation of the finite blocklength communications for cases such as block fading, MIMO and AWGN, also open up a variety of topics that need to be tackled. In [30], the necessity of studying and employing short packet communications is explained and is foreseen as one of the main enablers of the future telecommunication technologies. The authors bring into light the recent achievements in the field of short message transmission while also emphasizing the need for more research to be done on several open challenges. Valuable works have been done in [31], [32] where authors use the finite blocklength notion to analyze the throughput of different wireless networks. In [33], a model similar to ours is investigated where they also use PPP to characterize the cellular networks and evaluate the outage and throughput of the network in which a base station is connected to its nearest neighbor. However, this is not the case in our model. In the models studied in this paper, we use PPP where users are at a fixed distance, we use a different characterization of the signal-to-interference-plus-noise ratio (SINR) distribution and constraints which have led to totally different analysis and results. Our focus is not on a single link communication, hence, the SINR in this case captures interference, and to some extend, the network dynamics as well. We provide a general approximation for the outage probability which does not rely on any specific distribution of the SINR. Here, we focus on massive connectivity constrained by reliability which are imposed on the network from the application at hand (smart grids). We show that it is possible to achieve reliable and ultra reliable communication using finite blocklength which is also a characteristic of smart metering transmissions. It is shown how important it is to know how reliability and latency are affected by the increasing number of interferers. We also propose two different schemes in order to overcome this problem, namely dynamic spectrum access and local licensing scenarios, that can be used based on the restrictions imposed by the application. D. Contributions The followings are the main contributions of this paper. ‚ The general expression of the outage probability as a function of the number of information bits, blocklength and density of interferers in closed-form in the finite blocklength regime. ‚ Closed-form approximations of the outage probability are derived for both the dynamic spectrum access and locally licensed scenarios, under different conditions in the finite blocklength regime. ‚ Two different schemes are evaluated to be used in different smart grid applications based on network density and reliability requirements, considering the finite blocklength regime. We show that these schemes can not be used in every network model, since they have different requirements, while we also show when it is most suitable to use which. 
‚ A general expression for the delay is proposed where the effect of retransmissions is investigated. Table I summarizes the functions and symbols that are used in this paper. The rest of the paper is divided as follows. Section II introduces the network model with how the communication model using the short blocklength is modeled, while Section III details the outage analysis for both of the proposed scenarios. Section IV presents the numerical results and how the two scenarios can reach the reliability requirements of smart grids, and how retransmissions affect the reliability and latency. Section V concludes this paper. Notation: The probability density function (PDF) and the cumulative distribution function (CDF) of a random vari-able (RV) T are denoted as f T ptq and F T ptq, respectively, while its expectation is E T r¨s. II. NETWORK MODEL The conventional methods for evaluating communication networks are not usually a suitable choice when studying large wireless networks due to several reasons such as focusing on signal to noise ratio (SNR) rather than signal to interference plus noise ratio (SINR) or the fact that the interference in these kinds of networks depends on the path loss, meaning that it also depends on the network geometry. However, using stochastic geometry and Poisson point process to model large wireless networks have proven to be a useful tool in solving the challenges faced by the classical methods [43]. Hence, in this paper, we assume a dense network where the position of the interferers is modeled as a Poisson point process [33]. Formally, we are dealing with a Poisson field of interferers [44] where the distribution of nodes that are causing interference follows a 2-dimensional Poisson point processΦ with density λ ą 0 (average number of nodes per m 2 ) [43] over an infinite plane [14] [45]. This process is represented asΦ " pX, Hq, where X is the set of interferers' locations and H represents the set of quasi-static channel fading coefficients in relation to the reference receiver located arbitrarily at the origin [14], [43]. Notice that from Slivnyak theorem [43] an arbitrarilylocated receiver is placed at the center of the Euclidean space and is used as a fixed point of reference which makes the estimation of the position of the surrounding elements possible [43]. Note that x i P X denotes a position in the 2-dimensional plane and i P N`. Besides, h i P H is assumed to be constant during the transmission of one block, which takes n channel uses, and during a spatial realization of the point process. We assume the fading coefficients follow a Rayleigh distribution, so that h 2 i is exponentially distributed h 2 i " Expp1q. The fading coefficient h 0 is associated with the reference link, composed by a transmitter located at distance d from its associated receiver. It should be noted that since here we are using unbounded path loss model, α ą 2 [43, Ch. 5]. A. Communication Model Signal propagation is modeled using large-scale distancebased path-loss and Rayleigh fading. The received power at the reference receiver from the interferer i is given by I i " W p h 2 i |x i |´α,(1) where W p is the transmit power, α ą 2 is the path-loss exponent and |x i | is the distance between the node x i and the reference receiver. It is important to note that W p is related to the interferers' transmit power; the same equation is valid for the reference link 0, which may have a different transmit power denoted by W s . Then: I 0 " W s h 2 0 d´α. 
The SINR [46] is defined as the random variable Z ≜ I_0/(η + I), where I = Σ_{i∈N+} I_i is the aggregate interference and η > 0 is the AWGN power. Under these conditions the SINR cumulative distribution function (CDF) is given as [46, Corollary 1]

F_Z(z|α, λ, ζ, ξ) = 1 − Pr[Z > z] = 1 − exp(−ζλ z^{2/α} − ξz),   (2)

where ζ = κπd²(W_p/W_s)^{2/α}, ξ = ηd^α/W_s, and κ = Γ(1 + 2/α) Γ(1 − 2/α). The probability density function (PDF) is then

f_Z(z|α, λ, ζ, ξ) = (2λζ/α · z^{2/α − 1} + ξ) exp(−ζλ z^{2/α} − ξz).   (3)

B. Short Blocklength Messages

Following [30], [41], we define the encoding/decoding procedures as follows. First, the encoder maps k information bits B = {B_1, ..., B_k} into a codeword with n symbols S = {S_1, ..., S_n}, satisfying the power constraint (1/n) Σ_{m=1}^{n} |S_m|² ≤ W_s. Then, S is transmitted through the wireless channel, generating T = {T_1, ..., T_n} as the output. Finally, the decoder makes an estimate of the information bits based on T, namely B̂, satisfying a maximum error probability constraint ε. We denote by R*(n, ε), in bits per channel use (bpcu), the maximal coding rate at finite blocklength (FB), which renders the largest rate k/n, where k is the number of information bits and n denotes the blocklength, whose error probability does not exceed ε [40]. Under quasi-static conditions, R*(n, ε) can be tightly approximated by [41]

R*(n, ε) ≈ sup{R : Pr[log₂(1 + Z) < R] < ε}.   (4)

For codes of R = k/n bpcu, the outage probability in quasi-static fading is approximated as [41]

ε = E_Z[ Q( √n (log₂(1 + Z) − R) / √(V(Z)) ) ],   (5)

where V(Z) = (1 − (1 + Z)^{−2})(log₂ e)² is the channel dispersion and measures the stochastic variability of the channel relative to a deterministic one with the same capacity [27]. The above outage function can also be expressed as Pr[SINR < γ_th], where γ_th is the SI(N)R threshold of the receiver, which is determined by the channel capacity and is the minimum SI(N)R needed in order to have a successful link connection; the reliability can then be defined as 1 − Pr[SINR < γ_th]. In other words, an outage event occurs when a transmitted message is not successfully decoded by the receiver.

III. OUTAGE ANALYSIS

In this section we focus on the outage probability of the network described in Section II. The analysis is done for two different scenarios, which are special cases of the general outage expression to be presented first. The outage probability in (5) is intricate to evaluate in closed form, especially when considering a general SINR distribution as in (3). Therefore, we resort to a tight approximation of (5) before evaluating it in closed form for the scenarios under investigation in this work.

Fig. 1. An illustration of the dynamic spectrum access scenario, where licensed and unlicensed users share the up-link channel. The reference smart meter (unlicensed transmitter) is depicted by the house, the aggregator (unlicensed receiver) by the CPU and its antenna, the handsets are the mobile licensed users (interferers to the aggregator) and the big antenna is the cellular base-station. As the smart meter uses directional antennas with limited transmit power (bold arrow), its interference towards the base-station can be ignored. The thin black arrows represent the licensed users' desired signal, while the red ones represent their interference towards the aggregator.

Proposition 1. Given the network described in Section II, the outage probability of the reference link (the link between the reference receiver and its respective transmitter [47]) is well approximated as

ε_ap = [F_Z(ϑ) + F_Z(ϱ)]/2 + (βθ/√(2π)) [F_Z(ϑ) − F_Z(ϱ)] − ∫_ϱ^ϑ (βz/√(2π)) f_Z(z) dz,   (6)

where β = √(n/(2π)) (2^{2R} − 1)^{−1/2}, θ = 2^R − 1, ϑ = θ + √(π/2) β^{−1} and ϱ = θ − √(π/2) β^{−1}.

Proof. See Appendix A.

Note that (6) covers a wide range of scenarios with path-loss exponent α > 2. It is worth mentioning that only one integral remains in (6), and the overall expression is composed of well-known functions, which facilitates its integration by numerical methods compared with the original expression in (5).

A. Dynamic Spectrum Access Interference-Limited Scenario

We consider a dynamic spectrum access model, shown in Fig. 1, where licensed and unlicensed users share the frequency bands allocated to the uplink channel [48]. We assume an interference-limited scenario where the licensed and unlicensed transmission powers are respectively W_p and W_s. In this case, the noise power is negligible with respect to the aggregated interference [14]. The licensed users are mobile users communicating with a cellular base-station, while the unlicensed users are the smart meters that send data to their corresponding aggregator. The interference in this model is generated by the mobile users with respect to the aggregator (reference receiver), as discussed in [14].

Proposition 2. Assuming the network deployment described in Section II and the reference scenario in Section III-A, in an interference-limited scenario where ξ ≈ 0, the outage probability is

ε_SS = β(ϑ − ϱ)/√(2π) + (αβ / (2√(2π) (ζλ)^{α/2})) [Γ(α/2, ζλϑ^{2/α}) − Γ(α/2, ζλϱ^{2/α})].   (7)

Proof. See Appendix B.

Corollary 1. For the special case of interference-limited dense urban scenarios, where α = 4 is a good approximation for the path loss exponent, the outage probability in (7) reduces to

ε_SS,α=4 = β(ϑ − ϱ)/√(2π) + (2β / (√(2π) (ζλ)²)) [ e^{−ζλ√ϑ} (ζλ√ϑ + 1) − e^{−ζλ√ϱ} (ζλ√ϱ + 1) ].   (8)

The effect of different transmit powers W_s on the outage probability as a function of the network density is shown in Fig. 2. We can see that, as the transmit power increases, the link can reach a higher reliability level in denser networks, even with a high rate. It should be noted that this model is suitable for applications that do not require very strict reliability levels, such as smart meter reading. As the noise level is negligible compared with the interference, the density of interfering nodes shall be high. The mobile users, the source of interference in this case, transmit with a higher power compared with the smart meters.

B. Locally Licensed Scenario

In this case, we analyze a locally licensed scenario, using the µO concept, where the previously unlicensed link is now also a licensed user of the system in a specific geographical region. Although this concept guarantees the exclusive usage of the frequency band, the environment is still unfriendly: there are different entities that may cause interference, such as base stations, smart meters and mobile users. Therefore, instead of assuming no interference, we consider a point process with low density λ, leading to low interference levels. However, unlike the previous case, the noise level is not negligible anymore and it will affect the reliability of the system. The outage probability is given next.

Proposition 3. For the network model described in Section II and the characteristics of the µO scenario described in Section III-B, the outage probability is given in (9) on top of page 8, where α = 4; hence, (2) is denoted by F_Z(ϑ|ξ).

Proof. See Appendix C.

The µO scenario cannot be assumed interference-limited; on the contrary, the noise power here is the major factor in the SINR.
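Before turning to the numerical results, the sketch below evaluates the exact outage (5) and the linearized approximation behind (6) by direct numerical integration of the SINR PDF (3), rather than through the closed forms (7)-(9). The piecewise-linear weight is built from the derivative of Q(g(t)) at θ = 2^R − 1, following the construction of Appendix A, and the parameter values (d = 1, W_p = W_s, α = 4, η = 10^{-3}, n = 500, R = 0.1) are illustrative choices, not results from the paper.

```python
import numpy as np
from scipy import integrate, special

def sinr_cdf(z, alpha, lam, zeta, xi):
    """SINR CDF F_Z(z | alpha, lam, zeta, xi) from (2)."""
    return 1.0 - np.exp(-zeta * lam * z**(2/alpha) - xi * z)

def sinr_pdf(z, alpha, lam, zeta, xi):
    """SINR PDF f_Z(z | alpha, lam, zeta, xi) from (3)."""
    return (2*lam*zeta/alpha * z**(2/alpha - 1) + xi) * np.exp(-zeta*lam*z**(2/alpha) - xi*z)

def outage_exact(n, R, alpha, lam, zeta, xi):
    """Finite-blocklength outage (5): E_Z[Q(sqrt(n)(log2(1+Z) - R)/sqrt(V(Z)))]."""
    def integrand(z):
        V = (1 - (1 + z)**-2) * np.log2(np.e)**2
        arg = np.sqrt(n) * (np.log2(1 + z) - R) / np.sqrt(V)
        return 0.5 * special.erfc(arg / np.sqrt(2)) * sinr_pdf(z, alpha, lam, zeta, xi)
    val, _ = integrate.quad(integrand, 1e-9, np.inf, limit=200)
    return val

def outage_linearized(n, R, alpha, lam, zeta, xi):
    """Approximation behind (6): Q(g(z)) is replaced by a piecewise-linear weight
    centred at theta = 2^R - 1, with slope equal to the derivative of Q(g(t)) there."""
    theta = 2**R - 1
    slope = np.sqrt(n / (2*np.pi)) / np.sqrt(2**(2*R) - 1)
    half = 1 / (2 * slope)
    lo, hi = max(theta - half, 0.0), theta + half
    lin, _ = integrate.quad(
        lambda z: (0.5 - slope*(z - theta)) * sinr_pdf(z, alpha, lam, zeta, xi),
        lo, hi, limit=200)
    return sinr_cdf(lo, alpha, lam, zeta, xi) + lin

# Illustrative parameters
alpha, d, Wp, Ws, eta, n, R = 4.0, 1.0, 1.0, 1.0, 1e-3, 500, 0.1
kappa = special.gamma(1 + 2/alpha) * special.gamma(1 - 2/alpha)
zeta = kappa * np.pi * d**2 * (Wp/Ws)**(2/alpha)
xi = eta * d**alpha / Ws
for lam in (1e-3, 1e-2, 1e-1):
    print(f"lambda = {lam:.0e}:  exact = {outage_exact(n, R, alpha, lam, zeta, xi):.4e}"
          f"  linearized = {outage_linearized(n, R, alpha, lam, zeta, xi):.4e}")
```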
To compute the numerical results, we assume here W p " W s " 1 and ξ " 0.001. Recall that in the interference limited scenario, when the interference is low, we can achieve a low outage probability even with a high network density for a given coding rate. Increasing the smart meters transmit power results in having a higher success probability. As the Fig. 3. An illustration of the locally licensed scenario, where the micro operator holds an exclusive license for its own usage in a specific geographical region (isolated area). There is interference caused by the entities outside of this area such as mobile phones and Wi-Fi. However, since the level of interference in this area is very low, noise is what is going to harm the communications in this model. The reference smart meter is depicted by the house, the micro-operator by the CPU and its antenna, the handsets are the mobile licensed users (interferers to the aggregator) and the big antenna is the cellular base-station. The thin black arrows represent the users' desired signal, while the red ones represent the interference coming from outside of the area. interference power increases, a lower coding rate is needed to have the same outage probability with the same density. In terms of outage, the µO scenario, in its turn, behaves similarly, meaning that with increasing the network density, the outage probability increases, however, the outage probability is generally lower in this model as the interference level is negligible and noise is the main factor affecting the performance of the network. The operating regions for the dynamic spectrum access and the µO scenarios are shown in Figs. 4 and 5, respectively. The presented outage levels are chosen based on the fact that the dynamic spectrum access model is suitable for the non-critical applications, which we illustrate by reliabilities between 98% to 99%, which is relatively high for some applications (for smart grid application, refer to [37, Table.3], and will be further discussed in the next section). On the other hand, when we assess the µO, one can see a very high reliability with error probability as low as 0.1% (i.e. reliability ě 99.9% ). In other words, µO is suitable for critical applications with high reliability requirements. Thus, the operating region for this model is presented as the area where uo ď 10´3. C. On the Accuracy of (6) - (9) As it was mentioned earlier in this section, we use an approximation of (5) for calculating the outage probability in (6) which is then used to derive the closed form equations of the outage probability for different scenarios presented in (7)- (9). In this section, the approximation is compared to the exact equation for both the interference and noise limited scenarios, as shown in Figs. 6 and 7 respectively. We can see that the results from the approximated and closed form equations are almost always equal to the exact equation (5). Considering the error metric below ∆ " | ´ ap |,(10) we can see that ∆ is almost always either zero or very close to zero. It is only for the case of the locally licensed scenario that we can see, for a very low values of λ, a difference of at most 4% between (5) and the approximation which is still a very low difference. A more elaborated analysis of the error metric can be found in [49]. IV. MEETING THE SMART GRID REQUIREMENTS This section focuses on the specific requirements for different smart grid applications. 
Specifically, we analyze the impact of blocklength, retransmission attempts, and network density on the outage probability of the two proposed scenarios. 1) Dynamic Spectrum Access Scenario: This scenario is suitable for applications like smart meters periodic transmissions [37]. The frequency of the transmissions might be relatively high over one day [48]: the smart meters transmit data every 15 minutes during a period of 24 hours which means smart meters need to transmit data 96 times per day. The properties explained above can be seen in Fig.8-a where the behavior of λp SS |α " 4q is shown. With increasing λ, the outage probability also increases due to the higher interference level. Nevertheless, we can still achieve our desired outage probability, even in denser networks. We can confirm that as the unlicensed users transmit power increases, we can achieve better reliability in denser networks. It was mentioned earlier that with finite blocklength, we cannot achieve the ultra-reliable (UR) region with the dynamic spectrum access model. Consequently, this approach shall be used in applications with looser requirements (98%´99%). In Fig.9-a we investigate the effect of the finite blocklength (n) on the outage probability for different information bit sizes (k). where we can see that, for 100 ď n ď 1000, ď 10´3 cannot be achieved. For reaching the UR region, the blocklength would have to be increased to very large numbers. 2) Locally Licensed Scenario: µO can achieve higher levels of reliability, meeting the requirements of more critical applications like fault detection. For more examples about the reliability and data size requirements for different smart grid applications, refer to [37,Table.3]. The µO approach is subjected to a lower interferers' density when compared to the dynamic spectrum access, as shown in Fig.8-b. When the noise level is low, the outage probability is also low. It is illustrated in this figure that for λ ă 10´3, this model can operate in the UR region where reliability level is at least 99.9%. Fig.9-b shows that ultra-reliability can be achieved with short blocklength in the µO scenario. By increasing k, the required blocklength for keeping the link in the UR region also increases; in any case, ď 10´3 is still achievable for relatively high k when 100 ď n ď 1000. A. Retransmission Attempts Two basic strategies are normally employed to cope with transmission errors in communication systems, namely automatic repeat request (ARQ) and forward error correction (FEC) [50]. In this paper, we only consider ARQ. ARQ consists of an acknowledgment (ACK) or negative acknowledgment (NACK) messages to be sent by the receiver to inform whether the intended message has been successfully decoded. If the transmission was not successful, a retransmission is requested and the retransmission continues until the codeword is decoded successfully or the allowed maximum number of retransmission is reached [31]. This strategy has some drawbacks such as loss of throughput, which are studied in [51], [52] (without considering the high reliability or low latency, though). Following [53], we study the effect of the number of transmission attempts for a given message on the outage probability¯ , including at most m transmission attempts assuming Type-I HARQ is [54]: " pn, λq m ,(11) where is the outage probability in (5). The effect of increasing the number of transmission attempts on both scenarios is shown in Fig. 10. It can be seen that as m increases, the reliability is enhanced. 
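The effect of (11) can be reproduced with a few lines: given a per-attempt outage ε obtained from any of (6)-(9), the residual outage after up to m Type-I (H)ARQ attempts is simply ε^m. The per-attempt values below are illustrative placeholders rather than outputs of the propositions.

```python
import numpy as np

def outage_with_retx(eps_single, m):
    """Residual outage after up to m transmission attempts, per (11): eps_bar = eps^m."""
    return np.asarray(eps_single) ** m

# Illustrative per-attempt outage values (e.g. from a sweep of the network density)
eps_single = np.array([5e-2, 1e-2, 1e-3])
for m in (1, 2, 3):
    print(f"m = {m}:", outage_with_retx(eps_single, m))
```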
Comparing the outage probabilities of when only one or up to two transmission attempts are allowed, we can see that for the same λ, a much lower outage probability can be achieved. Considering the outage curve for dynamic spectrum access and µO, at some point at very low interferers' densities, the two curves cross each other. This is due to the fact that the network density becomes very low from that point onward. Therefore, the interference power becomes lower and the dynamic spectrum access model starts to have a lower outage than the µO scenario. However, it is important to remember that the dynamic spectrum access model is designed to be used in denser networks; so the fact that its outage probability becomes less than the µO scenario for networks with very low densities shall be neglected since it contradicts the basic assumption of a interference-limited network. While retransmissions increase the reliability of the network, it also increases the latency, which is in fact another important aspect of MTC. As 5G and MTC have strict (6) and (7) compared to (5) for the Dynamic spectrum access scenario outage probability as a function of the density λ, considering d " 1, η " 1, n " 500, and R " 0.1. (6) and (9) compared to (5) for the locally licensed scenario outage probability as a function of the density λ, considering d " 1, n " 500, and R " 0.1. uo " r1´F Z pϑ|ξqsp´1 2´β θ ? 2π q`r1´F Z p |ξqsp´1 2`β θ ? 2π q´r 1´F Z p |ξqsr1´F Z pϑ|ξqs 2 ? 2πξ 3 2 " 2β a ξppF´1 Z pϑ|ξqp1`ξ q´pF´1 Z p |ξqp1`ξϑqq`exppζλp ? `?ϑq`ξp `ϑq`ζ 2 λ 2 4ξ q ? πβζλ " erfˆζ λ 2 ? ξ`a ξ˙´erfˆζ λ 2 ? ξ`a ϑξ˙  .(9) requirements in terms of latency, we should also consider this metric in our analysis by limiting the maximum number of transmission attempts.. In this case, the total delay is calculated as δ " n`ν`m ÿ j"1 pn j`νj q,(12) where ν denotes the number of channel uses for ACK/NACK messages that have been sent and n is the number of channel uses. The average delay expression is then δ " 1 m m ÿ j"1 pn, λq.(13) The number of channel uses and symbol time are the determining factors when dealing with delay. The symbol time (T s ) of LTE (long term evolution) is T s « 66.7µs [55] and the current latency requirements of different smart grid applications are described in [37, Table.3]. However, 5G is going to benefit from ultra low latency compared to LTE. Hence, the smart grids are also going to have ultra low latency which is expected to be 3ms to 5ms [56]. Considering that the symbol time is going to be T s " 1 120k « 8.3µs, for n " 200 and m " 1, δ " 1.66ms`ν. As the number of allowed transmission attempts increases, the delay also increases. If m " 2, then δ " 3.32ms`ν. We can see that increasing m results in increasing the delay and this increase will not be linear since at each transmission there is also a feedback message sent every time. Thus, it is very important to limit the number of transmission attempts to avoid increasing the latency of the system. V. DISCUSSION AND FINAL REMARKS This paper evaluates the possibility of meeting the reliability requirements of different smart grid applications by using FB in two different system models, (i) dynamic spectrum access scenario suitable for applications with loose reliability requirements (98%´99%), and (ii) µO scenario suitable for applications with strict reliability requirements (more than 99%). Our results show that it is possible to meet the expected reliability levels and even reach the UR region for smart grids while having FB. 
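The latency bookkeeping of (12)-(13) can be made concrete with the small helper below, under one simplifying reading: every attempt spends n + ν channel uses of T_s seconds each, with the feedback overhead ν kept as a separate term (set to zero here) as in the numerical example above. With T_s ≈ 8.3 µs and n = 200 this reproduces the 1.66 ms and 3.32 ms figures quoted in the text; the exact accounting of ν is an assumption of this sketch.

```python
def total_delay_ms(n, m, Ts=8.3e-6, nu=0):
    """Latency of up to m attempts of an n-channel-use codeword, following the
    structure of (12): each attempt costs n + nu channel uses (nu = ACK/NACK
    channel uses), each lasting Ts seconds. Result is returned in milliseconds."""
    return m * (n + nu) * Ts * 1e3

# n = 200 and Ts ~ 8.3 us, as in the 5G numerology discussed in the text
for m in (1, 2, 3):
    print(f"m = {m}: {total_delay_ms(200, m):.2f} ms (excluding feedback, nu = 0)")
```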
It is shown that several factors such as network density, coding rate and interference and noise level affect the outage probability of the system, hence, they should be taken into consideration when choosing a suitable model considering the required reliability level of a specific application. Studying the ultra reliability and delay opens up a wide range of research opportunities in the smart grid communication systems. APPENDIX A PROOF OF PROPOSITION 1 As pointed out in [31], the function Qpgptqq can be tightly approximated by a linear function for the whole S(I)NR range. Notice that the argument inside the Q-function in (5) is given as gptq " ? n`1´p1`tq´2˘´1 2 log p1`tq, and that gptq is an increasing function of t, but not strictly positive @t P R, which restricts the use of other well known approximations for the Q-function [57]. Then, let Qpgptqq « W ptq be denoted as W ptq " $ ' & ' % 1 t ď 1 2´β ? 2π pt´θq ă t ă ϑ 0 t ě ϑ(14) where θ " 2 R´1 is the solution of gptq " 0, while β " a n 2π p2 2R´1 q´1 2 is the solution for B Qpgptqq Bt | t"θ . Then, the outage probability becomes, " E Z r s " ż 8 0 W pzqf Z pzqdz " ż 0 pf Z p qqdz`ż ϑ ˆ1 2´β ? 2π pz´θq˙f Z pzqdz " F Z pZqˇˇˇˇ 0`ż ϑ ˆ1 2´β ? 2π pz´θq˙f Z pzqdz " F Z p q`1 2 F Z pzqˇˇˇˇ ϑ`β θ ? 2π F Z pzqˇˇˇˇ ϑ´ϑ ż βz ? 2π f Z pzqdz,(15) which after few algebraic manipulations is written as in (6). APPENDIX B PROOF OF PROPOSITION 3 By setting ξ " 0 and replacing it into (2) and (3), while considering F Z pzq " F Z pz|α " 4, ξ " 0q and f Z pzq " f Z pz|α " 4, ξ " 0q, the integral in (6) where the integral can be solved using integration by parts [58, §2.02-5]. Hence, we attain (7) by replacing (16) into (6) Considering F Z pzq " F Z pz|α " 4, ξq and f Z pzq " f Z pz|α " 4, ξq, the integral in (6) (17) into (6), yields (9). , and I " ř iPN`I i and η ą 0 is the AWGN power. Fig. 2 . 2Dynamic spectrum access scenario outage probability as a function of the network density λ and coding rate R, where d " 1 and η " 1. Fig. 4 . 4Dynamic spectrum access scenario outage probability with different transmit power as a function of the network density λ, considering d " 1, η " 1, n " 200, and R " 0.1. Fig. 5 . 5Locally licensed scenario outage probability with different noise levels as a function of the network density λ, considering d " 1, n " 200, and R " 0.1. Fig. 6 . 6Comparison of the accuracy of the approximation used in Fig. 7 . 7Comparison of the accuracy of the approximation used in Fig. 8 .Fig. 9 .Fig. 10 . 8910Network density λ as a function of the outage probability in (a) dynamic spectrum access scenario where d " 1, η " 1, n " 200, and R " 0.1 and (b) locally licensed scenario, considering d " 1, n " 200, R " Blocklength n as a function of the outage probability in (a) dynamic spectrum access scenario, considering d " 1, η " 1, n " 200, R " 0.1, λ " 10´2 andWp Ws " 1.4 and (b) locally licensed scenario, considering d " 1, n " 200, R " 0.1, ξ " 0.001 Watts, λ " Effect of increasing the number of retransmissions on (a) dynamic spectrum access scenario and (b) locally licensed scenario outage probability as a function of the network density λ, considering d " 1, Wp Ws " 1.4 and ξ " 0.001. 
TABLE I SUMMARY IOF THE FUNCTIONS AND SYMBOLS.Symbol Expression F Z r¨s SINR CDF f Z r¨s SINR PDF E T r¨s Expectation Γ r¨s Gamma Function Q r¨s Gaussian Q-Function V pzq r¨s Channel Dispersion Ts Symbol Timê Φ Poisson Point Process x Set of Interferers' Locations H Channel Fading Coefficient λ Network Density |x i | Distance Between Node x i and the Reference Receiver h 0 Channel Fading Coefficient in the Reference Link Wp Licensed User Transmit Power Ws Unlicensed User Transmit Power α Path Loss Exponent Z SINR I AWGN Power ξ Noise Level k Information Bits n Blocklength R Coding Rate d Distance Between Transmitter and Receiver m Number of Transmission Attempts γ th SINR Threshold ν Number of Channel Uses for ACK/NACK The gamma function is defined as Γptq [42, Ch 6, §6.1.1], and the regularized upper incomplete gamma function is denoted as Γps, tq [42, Ch 6, §6.5.3]. Qp¨q denotes the Gaussian Q-function Qptq "1 ? 2π ş 8 t exp´´u 2 2¯d u " 1 2 Erfc´t ? 2¯[ 42, §.26.2.3]. after few algebraic manipulations and using [58, §2.321-1] and [58, §2.33-10].APPENDIX C PROOF OF PROPOSITION 3 has a closed-form solution when α " 4 as follows where the integral can be solved though integration by parts and with the help of [58, §2.33-10] and [58, §2.33-16]. Then, substituting α " 4 andI " β ? 2π ż ϑ zf Z pzqdz " r1´F Z p qsr1´F Z pϑqs 2 ? 2πξ 3 2 " 2β a ξpp1´F Z pϑqq´1p1`ξ q pp1´F Z p qq´1p1`ξϑqqq exppζλp ? `?ϑq`ξp `ϑq`ζ 2 λ 2 4ξ q ? πβζ " erfˆζ λ 2 ? ξ`a ξ˙´erfˆζ λ 2 ? ξ`a ξ˙  , (17) Notice that we denote F Z pzq " F Z pz|α, λ, ζ, ξq and f Z pzq " f Z pz|α, λ, ζ, ξq, except when the parameters are being manipulated, then we explicitly indicate them. Ericsson mobility report on the pulse of the networked society. Ericsson, Ericsson White PapersEricsson, "Ericsson mobility report on the pulse of the networked society," Ericsson White Papers, June 2015. [Online]. Available: http://www.ericsson.com Unlocking the Potential of the Internet of Things. J Manyika, McKinsey Global InstituteJ. Manyika et al., "Unlocking the Potential of the Internet of Things," McKinsey Global Institute, 2015. A survey on internet of things from industrial market perspective. C Perera, C H Liu, S Jayawardena, M Chen, IEEE Access. 2C. Perera, C. H. Liu, S. Jayawardena, and M. Chen, "A survey on internet of things from industrial market perspective," IEEE Access, vol. 2, pp. 1660-1679, 2014. Next generation M2M cellular networks: challenges and practical considerations. A Ali, W Hamouda, M Uysal, IEEE Communications Magazine. 539A. Ali, W. Hamouda, and M. Uysal, "Next generation M2M cellular net- works: challenges and practical considerations," IEEE Communications Magazine, vol. 53, no. 9, pp. 18-24, 2015. The metis 5G system concept-meeting the 5G requirements. H Tullberg, P Popovski, Z Li, M A Uusitalo, A Höglund, Ö Bulakci, M Fallgren, J F Monserrat, IEEE Communications Magazine. 5412H. Tullberg, P. Popovski, Z. Li, M. A. Uusitalo, A. Höglund, Ö. Bulakci, M. Fallgren, and J. F. Monserrat, "The metis 5G system concept-meeting the 5G requirements," IEEE Communications Magazine, vol. 54, no. 12, pp. 132-139, 2016. Massive machine-type communications in 5G: Physical and mac-layer solutions. C Bockelmann, N Pratas, H Nikopour, K Au, T Svensson, C Stefanovic, P Popovski, A Dekorsy, IEEE Communications Magazine. 549C. Bockelmann, N. Pratas, H. Nikopour, K. Au, T. Svensson, C. Ste- fanovic, P. Popovski, and A. Dekorsy, "Massive machine-type communi- cations in 5G: Physical and mac-layer solutions," IEEE Communications Magazine, vol. 
54, no. 9, pp. 59-65, 2016. Ultrareliable short-packet communications with wireless energy transfer. O L A López, H Alves, R D Souza, E M G Fernández, IEEE Signal Processing Letters. 244O. L. A. López, H. Alves, R. D. Souza, and E. M. G. Fernández, "Ul- trareliable short-packet communications with wireless energy transfer," IEEE Signal Processing Letters, vol. 24, no. 4, pp. 387-391, 2017. Ultrareliable cooperative short-packet communications with wireless energy transfer. O López, E Fernández, R Demo Souza, H Alves, IEEE Sensors Journal. O. Alcaraz López, E. Fernández, R. Demo Souza, and H. Alves, "Ultra- reliable cooperative short-packet communications with wireless energy transfer," IEEE Sensors Journal. Toward Massive Machine Type Cellular Communications. Z Dawy, W Saad, A Ghosh, J G Andrews, E Yaacoub, IEEE Wirel. Commun. 241Z. Dawy, W. Saad, A. Ghosh, J. G. Andrews, and E. Yaacoub, "Toward Massive Machine Type Cellular Communications," IEEE Wirel. Commun., vol. 24, no. 1, pp. 120-128, feb 2017. [Online]. Latency critical iot applications in 5G: Perspective on the design of radio interface and network architecture. P Schulz, M Matthe, H Klessig, M Simsek, G Fettweis, J Ansari, S A Ashraf, B Almeroth, J Voigt, I Riedel, IEEE Communications Magazine. 552P. Schulz, M. Matthe, H. Klessig, M. Simsek, G. Fettweis, J. Ansari, S. A. Ashraf, B. Almeroth, J. Voigt, I. Riedel et al., "Latency critical iot applications in 5G: Perspective on the design of radio interface and network architecture," IEEE Communications Magazine, vol. 55, no. 2, pp. 70-78, 2017. 5g mobile cellular networks: Enabling distributed state estimation for smart grids. M Cosovic, A Tsitsimelis, D Vukobratovic, J Matamoros, C Anton-Haro, IEEE Communications Magazine. 5510M. Cosovic, A. Tsitsimelis, D. Vukobratovic, J. Matamoros, and C. Anton-Haro, "5g mobile cellular networks: Enabling distributed state estimation for smart grids," IEEE Communications Magazine, vol. 55, no. 10, pp. 62-69, OCTOBER 2017. D1. 1 refined scenarios and requirements, consolidated use cases, and qualitative techno-economic feasibility assessment. J F M Del Río, D. M.-S Gandía, J. F. M. del Río and D. M.-S. Gandía, "D1. 1 refined scenarios and requirements, consolidated use cases, and qualitative techno-economic feasibility assessment," https://metis-ii.5g-ppp.eu/wp-content/uploads/ deliverables/METIS-II_D1.1_v1.0.pdf, 2016. Next generation/dynamic spectrum access/cognitive radio wireless networks: A survey. I F Akyildiz, W.-Y Lee, M C Vuran, S Mohanty, Computer networks. 5013I. F. Akyildiz, W.-Y. Lee, M. C. Vuran, and S. Mohanty, "Next generation/dynamic spectrum access/cognitive radio wireless networks: A survey," Computer networks, vol. 50, no. 13, pp. 2127-2159, 2006. Maximizing the link throughput between smart meters and aggregators as secondary users under power and outage constraints. P H Nardelli, M De Castro Tomé, H Alves, C H De Lima, M Latva-Aho, Ad Hoc Networks. 41P. H. Nardelli, M. de Castro Tomé, H. Alves, C. H. de Lima, and M. Latva-aho, "Maximizing the link throughput between smart meters and aggregators as secondary users under power and outage constraints," Ad Hoc Networks, vol. 41, pp. 57-68, 2016. Sharing spectrum through spectrum policy reform and cognitive radio. J M Peha, Proceedings of the IEEE. 974J. M. Peha, "Sharing spectrum through spectrum policy reform and cognitive radio," Proceedings of the IEEE, vol. 97, no. 4, pp. 708-719, 2009. Primary radio user activity models for cognitive radio networks: A survey. 
Y Saleem, M H Rehmani, Journal of Network and Computer Applications. 43Y. Saleem and M. H. Rehmani, "Primary radio user activity models for cognitive radio networks: A survey," Journal of Network and Computer Applications, vol. 43, pp. 1-16, 2014. Cognitive radio based hierarchical communications infrastructure for smart grid. R Yu, Y Zhang, S Gjessing, C Yuen, S Xie, M Guizani, IEEE network. 255R. Yu, Y. Zhang, S. Gjessing, C. Yuen, S. Xie, and M. Guizani, "Cognitive radio based hierarchical communications infrastructure for smart grid," IEEE network, vol. 25, no. 5, 2011. Cognitive radio networks for smart grid applications: A promising technology to overcome spectrum inefficiency. V C Gungor, D Sahin, IEEE Vehicular Technology Magazine. 72V. C. Gungor and D. Sahin, "Cognitive radio networks for smart grid ap- plications: A promising technology to overcome spectrum inefficiency," IEEE Vehicular Technology Magazine, vol. 7, no. 2, pp. 41-46, 2012. An opportunistic spectrum access scheme for multicarrier cognitive sensor networks. D Darsena, G Gelli, F Verde, IEEE Sensors Journal. 178D. Darsena, G. Gelli, and F. Verde, "An opportunistic spectrum access scheme for multicarrier cognitive sensor networks," IEEE Sensors Jour- nal, vol. 17, no. 8, pp. 2596-2606, 2017. Interference-controlled transmission schemes for cognitive radio in frequency-selective timevarying fading channels. W Qiu, B Xie, H Minn, C.-C Chong, IEEE Transactions on Wireless Communications. 111W. Qiu, B. Xie, H. Minn, and C.-C. Chong, "Interference-controlled transmission schemes for cognitive radio in frequency-selective time- varying fading channels," IEEE Transactions on Wireless Communica- tions, vol. 11, no. 1, pp. 142-153, 2012. Convolutive superposition for multicarrier cognitive radio systems. D Darsena, G Gelli, F Verde, IEEE Journal on Selected Areas in Communications. 3411D. Darsena, G. Gelli, and F. Verde, "Convolutive superposition for multicarrier cognitive radio systems," IEEE Journal on Selected Areas in Communications, vol. 34, no. 11, pp. 2951-2967, 2016. Cooperative spectrum sharing via controlled amplify-and-forward relaying. Y Han, A Pandharipande, S H Ting, IEEE 19th International Symposium on Personal, Indoor and Mobile Radio Communications, PIMRC. IEEEY. Han, A. Pandharipande, and S. H. Ting, "Cooperative spectrum sharing via controlled amplify-and-forward relaying," in IEEE 19th International Symposium on Personal, Indoor and Mobile Radio Com- munications, PIMRC 2008. IEEE, 2008, pp. 1-5. Cooperative decode-and-forward relaying for secondary spectrum access. IEEE Transactions on Wireless Communications. 810--, "Cooperative decode-and-forward relaying for secondary spectrum access," IEEE Transactions on Wireless Communications, vol. 8, no. 10, 2009. An amplify-andforward scheme for spectrum sharing in cognitive radio channels. F Verde, A Scaglione, D Darsena, G Gelli, IEEE Transactions on Wireless Communications. 1410F. Verde, A. Scaglione, D. Darsena, and G. Gelli, "An amplify-and- forward scheme for spectrum sharing in cognitive radio channels," IEEE Transactions on Wireless Communications, vol. 14, no. 10, pp. 5629- 5642, 2015. Time and power allocation for collaborative primary-secondary transmission using superposition coding. E.-H Shin, D Kim, IEEE Communications Letters. 152E.-H. Shin and D. Kim, "Time and power allocation for collabora- tive primary-secondary transmission using superposition coding," IEEE Communications Letters, vol. 15, no. 2, pp. 196-198, 2011. 
Micro operators to boost local service delivery in 5G. M Matinmikko, M Latva-Aho, A Petri, S Yrjölä, T Koivumäki, Wireless Personal Communications. In pressM. Matinmikko, M. Latva-aho, A. Petri, S. Yrjölä, and T. Koivumäki, "Micro operators to boost local service delivery in 5G," Wireless Personal Communications, vol. In press, May 2017. Channel coding rate in the finite blocklength regime. Y Polyanskiy, H V Poor, S Verdú, IEEE Trans. Inf. Theory. 565Y. Polyanskiy, H. V. Poor, and S. Verdú, "Channel coding rate in the finite blocklength regime," IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307-2359, 2010. On the design of low-density parity-check codes within 0.0045 db of the shannon limit. S.-Y Chung, G D Forney, T J Richardson, R Urbanke, IEEE Communications letters. 52S.-Y. Chung, G. D. Forney, T. J. Richardson, and R. Urbanke, "On the design of low-density parity-check codes within 0.0045 db of the shannon limit," IEEE Communications letters, vol. 5, no. 2, pp. 58-60, 2001. Two-way communication channels. C E Shannon, Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability. the Fourth Berkeley Symposium on Mathematical Statistics and Probability1Contributions to the Theory of Statistics. The Regents of the University of CaliforniaC. E. Shannon et al., "Two-way communication channels," in Proceed- ings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics. The Regents of the University of California, 1961. Towards Massive, Ultra-Reliable, and Low-Latency Wireless Communication with Short Packets. G Durisi, T Koch, P Popovski, Proceedings of IEEE. 1049G. Durisi, T. Koch, and P. Popovski, "Towards Massive, Ultra-Reliable, and Low-Latency Wireless Communication with Short Packets," Pro- ceedings of IEEE, vol. 104, no. 9, pp. 1711-1726, sep 2016. Finite Block-Length Analysis of the Incremental Redundancy HARQ. B Makki, T Svensson, M Zorzi, IEEE Wirel. Commun. Lett. 35B. Makki, T. Svensson, and M. Zorzi, "Finite Block-Length Analysis of the Incremental Redundancy HARQ," IEEE Wirel. Commun. Lett., vol. 3, no. 5, pp. 529-532, oct 2014. Finite Block-Length Analysis of Spectrum Sharing Networks: Interference-Constrained Scenario. IEEE Wirel. Commun. Lett. 44--, "Finite Block-Length Analysis of Spectrum Sharing Networks: Interference-Constrained Scenario," IEEE Wirel. Commun. Lett., vol. 4, no. 4, pp. 433-436, aug 2015. On the performance of amplifier-aware dense networks: Finite block-length analysis. B Makki, C Fang, T Svensson, M Nasiri-Kenari, 2016 International Conference on Computing, Networking and Communications (ICNC). IEEEB. Makki, C. Fang, T. Svensson, and M. Nasiri-Kenari, "On the performance of amplifier-aware dense networks: Finite block-length analysis," in 2016 International Conference on Computing, Networking and Communications (ICNC). IEEE, feb 2016, pp. 1-5. Machine-to-machine communications for home energy management system in smart grid. D Niyato, L Xiao, P Wang, IEEE Communications Magazine. 494D. Niyato, L. Xiao, and P. Wang, "Machine-to-machine communications for home energy management system in smart grid," IEEE Communi- cations Magazine, vol. 49, no. 4, 2011. Models for the modern power grid. P H J Nardelli, N Rubido, C Wang, M S Baptista, C Pomalaza-Raez, P Cardieri, M Latva-Aho, 10.1140/epjst/e2014-02219-6The European Physical Journal Special Topics. 22312P. H. J. Nardelli, N. Rubido, C. Wang, M. S. Baptista, C. Pomalaza- Raez, P. Cardieri, and M. 
Latva-aho, "Models for the modern power grid," The European Physical Journal Special Topics, vol. 223, no. 12, pp. 2423-2437, 2014. [Online]. Available: http: //dx.doi.org/10.1140/epjst/e2014-02219-6 The role of smart meters in enabling real-time energy services for households: The italian case. A Pitì, G Verticale, C Rottondi, A Capone, L Lo, Schiavo, Energies. 102199A. Pitì, G. Verticale, C. Rottondi, A. Capone, and L. Lo Schiavo, "The role of smart meters in enabling real-time energy services for households: The italian case," Energies, vol. 10, no. 2, p. 199, 2017. Communication network requirements for major smart grid applications in HAN, NAN and WAN. M Kuzlu, M Pipattanasomporn, S Rahman, Computer Networks. 67M. Kuzlu, M. Pipattanasomporn, and S. Rahman, "Communication network requirements for major smart grid applications in HAN, NAN and WAN," Computer Networks, vol. 67, pp. 74-88, 2014. Scenarios for 5G mobile and wireless communications: the vision of the metis project. A Osseiran, F Boccardi, V Braun, K Kusume, P Marsch, M Maternia, O Queseth, M Schellmann, H Schotten, H Taoka, IEEE Communications Magazine. 525A. Osseiran, F. Boccardi, V. Braun, K. Kusume, P. Marsch, M. Maternia, O. Queseth, M. Schellmann, H. Schotten, H. Taoka et al., "Scenarios for 5G mobile and wireless communications: the vision of the metis project," IEEE Communications Magazine, vol. 52, no. 5, pp. 26-35, 2014. Scenarios, requirements and kpis for 5G mobile and wireless system. M Fallgren, B Timus, METIS deliverable D. 11M. Fallgren, B. Timus et al., "Scenarios, requirements and kpis for 5G mobile and wireless system," METIS deliverable D, vol. 1, p. 1, 2013. Short-Packet Communications over Multiple-Antenna Rayleigh-Fading Channels. G Durisi, T Koch, J Ostman, Y Polyanskiy, W Yang, IEEE Trans. Commun. 642G. Durisi, T. Koch, J. Ostman, Y. Polyanskiy, and W. Yang, "Short- Packet Communications over Multiple-Antenna Rayleigh-Fading Chan- nels," IEEE Trans. Commun., vol. 64, no. 2, pp. 1-11, feb 2016. Quasi-Static Multiple-Antenna Fading Channels at Finite Blocklength. W Yang, G Durisi, T Koch, Y Polyanskiy, IEEE Trans. Inf. Theory. 607W. Yang, G. Durisi, T. Koch, and Y. Polyanskiy, "Quasi-Static Multiple- Antenna Fading Channels at Finite Blocklength," IEEE Trans. Inf. Theory, vol. 60, no. 7, pp. 4232-4265, jul 2014. M Abramowitz, I A Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover9th ed.M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th ed. Dover, 1965. Stochastic geometry for wireless networks. M Haenggi, Cambridge University PressM. Haenggi, Stochastic geometry for wireless networks. Cambridge University Press, 2012. Modeling interference in wireless ad hoc networks. P Cardieri, IEEE Communications Surveys & Tutorials. 124P. Cardieri, "Modeling interference in wireless ad hoc networks," IEEE Communications Surveys & Tutorials, vol. 12, no. 4, pp. 551-572, 2010. Contentionbased geographic forwarding strategies for wireless sensors networks. C H De Lima, P H Nardelli, H Alves, M Latva-Aho, IEEE Sensors Journal. 167C. H. de Lima, P. H. Nardelli, H. Alves, and M. Latva-aho, "Contention- based geographic forwarding strategies for wireless sensors networks," IEEE Sensors Journal, vol. 16, no. 7, pp. 2186-2195, 2016. On the joint impact of beamwidth and orientation error on throughput in directional wireless poisson networks. 
J Wildman, P H J Nardelli, M Latva-Aho, S Weber, IEEE Transactions on Wireless Communications. 1312J. Wildman, P. H. J. Nardelli, M. Latva-aho, and S. Weber, "On the joint impact of beamwidth and orientation error on throughput in directional wireless poisson networks," IEEE Transactions on Wireless Communications, vol. 13, no. 12, pp. 7072-7085, 2014. Stochastic geometry for modeling, analysis, and design of multi-tier and cognitive cellular wireless networks: A survey. H Elsawy, E Hossain, M Haenggi, IEEE Communications Surveys & Tutorials. 153H. ElSawy, E. Hossain, and M. Haenggi, "Stochastic geometry for modeling, analysis, and design of multi-tier and cognitive cellular wire- less networks: A survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 3, pp. 996-1019, 2013. Joint sampling-communication strategies for smart-meters to aggregator link as secondary users. M C Tomé, P H Nardelli, H Alves, M Latva-Aho, Energy Conference (ENERGYCON). M. C. Tomé, P. H. Nardelli, H. Alves, and M. Latva-aho, "Joint sampling-communication strategies for smart-meters to aggregator link as secondary users," in Energy Conference (ENERGYCON), 2016 IEEE International. IEEE, 2016, pp. 1-6. Wireless powered communications with finite battery and finite blocklength. O L A López, E M G Fernández, R D Souza, H Alves, IEEE Transactions on Communications. O. L. A. López, E. M. G. Fernández, R. D. Souza, and H. Alves, "Wire- less powered communications with finite battery and finite blocklength," IEEE Transactions on Communications, 2017. A hybrid arq scheme with parity retransmission for error control of satellite channels. S Lin, P Yu, IEEE Transactions on Communications. 307S. Lin and P. Yu, "A hybrid arq scheme with parity retransmission for error control of satellite channels," IEEE Transactions on Communica- tions, vol. 30, no. 7, pp. 1701-1719, 1982. Analysis of rate optimized throughput for large-scale MIMO-(h) ARQ schemes. P Larsson, L K Rasmussen, M Skoglund, Global Communications Conference (GLOBECOM). IEEEP. Larsson, L. K. Rasmussen, and M. Skoglund, "Analysis of rate optimized throughput for large-scale MIMO-(h) ARQ schemes," in Global Communications Conference (GLOBECOM), 2014 IEEE. IEEE, 2014, pp. 3760-3765. Optimal power allocation for hybrid ARQ with chase combining in iid rayleigh fading channels. T V Chaitanya, E G Larsson, IEEE Transactions on Communications. 615T. V. Chaitanya and E. G. Larsson, "Optimal power allocation for hybrid ARQ with chase combining in iid rayleigh fading channels," IEEE Transactions on Communications, vol. 61, no. 5, pp. 1835-1846, 2013. Ultra reliable communication via optimum power allocation for type-I ARQ in finite block-length. E Dosti, U L Wijewardhana, H Alves, M Latva-Aho, arXiv:1701.08617arXiv preprintE. Dosti, U. L. Wijewardhana, H. Alves, and M. Latva-aho, "Ultra reliable communication via optimum power allocation for type-I ARQ in finite block-length," arXiv preprint arXiv:1701.08617, 2017. Performance comparison of type I, II and III hybrid ARQ schemes over AWGN channels. M W Bahri, H Boujernaa, M Siala, IEEE International Conference on. 3IEEEin Industrial Technology, 2004. IEEE ICIT'04M. W. El Bahri, H. Boujernaa, and M. Siala, "Performance comparison of type I, II and III hybrid ARQ schemes over AWGN channels," in Industrial Technology, 2004. IEEE ICIT'04. 2004 IEEE International Conference on, vol. 3. IEEE, 2004, pp. 1417-1421. Overview LTE PHY: Part 1-principles and numerology etc. E Seidel, Nomor 3GPP Newsletter. E. 
Seidel, "Overview LTE PHY: Part 1-principles and numerology etc," Nomor 3GPP Newsletter, 2007. 5G for Mission Critical Communication: Achieve ultrareliability and virtual zero latency. Nokia, Nokia White PapNokia, "5G for Mission Critical Communication: Achieve ultra- reliability and virtual zero latency," Nokia White Pap., 2016. A Survey on Approximations of One-Dimensional Gaussian Q-Function. V Nguyen, Q Bao, L P Tuyen, H H Tue, V. Nguyen, Q. Bao, L. P. Tuyen, and H. H. Tue, "A Survey on Approximations of One-Dimensional Gaussian Q-Function," 2015. I Gradshteyn, I Ryzhik, Table of Integrals, Series, and Products. A. Jeffrey and D. ZwillingerElsevier7th ed.I. Gradshteyn and I. Ryzhik, Table of Integrals, Series, and Products, 7th ed., A. Jeffrey and D. Zwillinger, Eds. Elsevier, 2007.
[]
[ "Extrinisic Calibration of a Camera-Arm System Through Rotation Identification", "Extrinisic Calibration of a Camera-Arm System Through Rotation Identification" ]
[ "Steve Mcguire ", "Christoffer Heckman ", "Daniel Szafir ", "Simon Julier ", "Nisar Ahmed " ]
[]
[]
Determining extrinsic calibration parameters is a necessity in any robotic system composed of actuators and cameras. Once a system is outside the lab environment, parameters must be determined without relying on outside artifacts such as calibration targets. We propose a method that relies on structured motion of an observed arm to recover extrinsic calibration parameters. Our method combines known arm kinematics with observations of conics in the image plane to calculate maximum-likelihood estimates for calibration extrinsics. This method is validated in simulation and tested against a real-world model, yielding results consistent with ruler-based estimates. Our method shows promise for estimating the pose of a camera relative to an articulated arm's end effector without requiring tedious measurements or external artifacts.
null
[ "https://arxiv.org/pdf/1812.08280v1.pdf" ]
56,517,414
1812.08280
a7a136253da5767986ae6e086f003ba8523d81aa
Extrinisic Calibration of a Camera-Arm System Through Rotation Identification Steve Mcguire Christoffer Heckman Daniel Szafir Simon Julier Nisar Ahmed Extrinisic Calibration of a Camera-Arm System Through Rotation Identification Index Terms-roboticshand-eye problemself-calibrationstructure from motion Determining extrinsic calibration parameters is a necessity in any robotic system composed of actuators and cameras. Once a system is outside the lab environment, parameters must be determined without relying on outside artifacts such as calibration targets. We propose a method that relies on structured motion of an observed arm to recover extrinsic calibration parameters. Our method combines known arm kinematics with observations of conics in the image plane to calculate maximum-likelihood estimates for calibration extrinsics. This method is validated in simulation and tested against a real-world model, yielding results consistent with ruler-based estimates. Our method shows promise for estimating the pose of a camera relative to an articulated arm's end effector without requiring tedious measurements or external artifacts. I. INTRODUCTION In robotics, data fusion between multiple sensors is frequently required in order to accomplish some task in an operating environment. Often, the relative locations between various components must be known to a high degree of precision; any error in relative pose propagates throughout the system without possibility for correction. These locations can be determined via idealized means, such as computeraided design, or estimated means, such as motion capture systems or tag trackers. For many sensor types, the precise point of measurement may be buried within a housing or even within an integrated circuit, making direct measurement impractical. Furthermore, such idealized measurements may fail to capture assembly variations or installation error. In this work we consider the goal of calibrating an assembled group of kinematically linked sensors and actuators as shown in Fig. 1. To support the idea of "bolt together and go" robotics, calibration should be determined by operating the assembly in its environment and analyzing the associated output without using calibration targets or precision artifacts. Such scenarios arise frequently in the operation of robotic arms, boom cameras, and other articulated sensor mounts used for capturing scientific data. A significant barrier in such operations is measuring the exact relative transforms between various sections of the kinematic chain, such as the transform from an imaging sensor to an articulated end effector. We develop an approach to calibration that does not Fig. 1: Overview of the hand-eye calibration problem on a mobile robot base. A fixed camera rigidly mounted to the base of a robot arm observes the arm's end effector movements. rely on highly precise measurements for difficult-to-determine quantities; instead, we use a camera to observe the arm's motion and estimate the required quantities. Our approach estimates relative poses between an actuator and camera by observing features on the actuator under structured motion. These features trace out circles in 3-D world space; we estimate a 3-D model to be projected into the image plane in order to find relative poses between an actuator and an observing camera. The present work is the first application of the tracking of features to model 3-D circles in space, rather than modeling elliptical paths in projective space. 
This approach has many advantages related to noise robustness as described in Section III, and combines significant achievements in structure from motion, shape estimation, camera pose estimation, and self-calibration. The specific problem considered here is to determine extrinsic calibration parameters for a camera-arm system, such that the camera has a view of the end-effector of the arm, as shown in Fig. 2, with an idealized sketch in Fig. 3. An arm has i revolute joints yielding one degree of freedom per joint. Each arm joint produces a measurement θ i , representing the angular deflection of the revolute joint i. The arm has a well-known kinematic model, assumed to be precision manufactured. For this work, the final joint in the arm is a wrist-type joint with a rigidly-mounted end effector. The camera is assumed to be calibrated for intrinsic parameters In this experiment, a six-jointed Kinova Jaco2 arm is used. θ 0 θ 1 θ 2 T AC θ 3 Arm frame Camera frame Fig. 3: Simplified sketch of setup. In this setup, the world frame is defined as the origin of the arm's kinematic chain. The camera is fixed with respect to the arm's kinematic origin via rigid transform T AC . Note that our method does not assume coplanar rotations as suggested by the diagram. via other means (e.g. [1]) and rigidly mounted via a fixed, but unknown, transform T AC to the origin of the arm's kinematic chain. II. RELATED WORK Camera-arm calibration is typically referred to as the "handeye' problem in robotics and may be specified with either a stationary camera observing the motion of an arm or a mobile camera rigidly mounted to the end effector of an arm. An example of the current state of the art is from Pradeep et al. [2], where precision calibration targets are moved via well-known robot kinematic models to recover relative transforms between cameras and the known kinematic models. Our work develops a technique for recovering these same transforms without the need for precision calibration targets by exploiting how a camera image changes under robot motion. Specifically, we are interested in wrist-like rotary motions. Through this type of arm motion, we can factor the hand-eye problem into several component problems: feature detection, feature tracking, geometry reconstruction, and finally pose reconstruction. Our work focuses on the geometry reconstruction and the pose reconstruction aspects of the factored hand-eye problem. In our formulation of rotational geometry reconstruction, we are interested in recovering the parameters of a 3D circle from a set of observed image points. This is possible by leveraging an observation from Jiang et al. [3], who originally established a method for reconstruction of a moving object with respect to a fixed camera under the assumption that the motion is constrained to rotation about a single axis. In that work, it was observed that individual points rotating about a single axis will trace out an ellipse under projection. The main focus of [3] relates to reconstruction from a single rotation; we use multiple independent rotations to establish the relative transform from the target object to the camera. In [4], Sawhney et al. describe a method for reconstruction of objects under rotational motion, assuming that the camera undergoes the rotational motion attached to an arm observing a static scene. Sawhney develops much of the mathematical framework that will be used in our method. 
In [5], Fremont and Chellali address how to create a 3-D reconstruction of an object by rotating it about an axis, but does not address how to estimate the pose of such an object with respect to an arbitrary image plane. In [6], Liu and Hu use a fixed camera to observe a cylindrical spacecraft and estimate pose using a known CAD model to match ellipses to metric features. Their work relies on static imagery and resolves scale using the CAD model; in our formulation, we have no information about the configuration of the end effector. Hutter and Brewer [7] used the approach of approximating elliptical shapes in the image plane corresponding to true circles in 3-space [4], [8], [9] to estimate the pose of vehicle wheels from segmented images for self-driving car applications. Their work recovers the wheel's rotation axis in order to estimate steering angle. Our work accomplishes a parallel goal of recovering the end effector's rotation axis in order to estimate properties of the larger system. Another application is in [10], where pose estimation from an image of a well-known ellipse is used to calibrate a laser-camera system. However, the parameters of the underlying circle must be known a priori in the form of a known calibration target. Lundberg et al. [11] pose a similar problem setup, but use a single well-known feature on the end effector. This well-known feature is precisely positioned in each of a series of frames to form a 'virtual' calibration target. A calibration target is thus assembled in a point-wise manner, allowing calibration to continue using conventional techniques. A key difference between their solution and ours is the uniqueness and the prior knowledge of the well-known feature; our solution can utilize ambient features discovered at runtime. Also connected is that of Forsyth et al. [12], where camera pose was estimated by observing images of well-known conics. In contrast, we observe conics by tracking features of a rotating object over time; a critical distinction is that the world geometry of the observed conics is not well-known. Borghese et al. [13] also applied Forsyth's work for pose estimation, but only applied this to rotated calibration targets and not to generated conic paths in 3-D space resulting from object rotation. III. PROCEDURE The goal of our method is to estimate the camera's location relative to the base of the arm's kinematic chain. First, images are captured of the end effector rotating about the axis of the last joint in the arm while the remainder of the arm is motionless. While the arm is rotating, feature points (described in Sec. III-A) on the end effector are tracked and associated between frames to produce a set of arcs in the image plane. During an initial optimization step, the observed axis of rotation is estimated by requiring that each arc's center of rotation be collinear, with independent radii and distance along the rotation axis from an arbitrary starting point. Meanwhile, the arm's joint angles θ i are captured to calculate the arm's expected axis of rotation in the world frame. This procedure is repeated a number of times n, where the arm is repositioned to a different pose and the process is repeated. After all measurements are collected, the position of the camera in the arm frame is estimated. This technique is presented in video form in the Supplementary Materials. We use the encoding of a pose p ∈ R 6 as composed of six elements: x, y, z, φ, θ, ψ T (the latter three quantities representing roll, pitch, and yaw respectively). 
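A minimal sketch of this R^6 pose encoding is given below. The ZYX (yaw-pitch-roll) composition order and the numeric values are assumptions for illustration, since the text does not fix a rotation convention; the example simply expresses an arm-frame point in the camera frame through the inverse of T_AC.

```python
import numpy as np

def pose_to_transform(p):
    """Map p = (x, y, z, roll, pitch, yaw) in R^6 to a 4x4 homogeneous transform.
    Assumes the common ZYX (yaw-pitch-roll) composition."""
    x, y, z, roll, pitch, yaw = p
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

# Illustrative values: camera pose in the arm frame, and an arm-frame point
T_AC = pose_to_transform([0.1, -0.2, 0.5, 0.0, 0.1, 1.5])
point_arm = np.array([0.0, 0.0, 0.3, 1.0])            # homogeneous coordinates
point_cam = np.linalg.inv(T_AC) @ point_arm           # same point in the camera frame
print(point_cam)
```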
Each measurement observes the x, y position in the image plane of m feature points, as well as the arm angular measurements θ i . The set of all unique observed feature points is L. A 4 × 4 homogeneous transform giving the location of object A in coordinate frame B is denoted T BA . Estimates of a particular quantity P are denoted byP . Parameters to be estimated comprise the pose p corresponding to the location of the camera in the arm frame, represented alternatively as p AC ∈ R 6 or T AC as a 4 × 4 homogeneous transformation. These two representations will be used interchangeably via the Lie SE(3) exp() and log() operators where needed. The estimation procedure follows several steps: 1) arc identification, 2) circle estimation, 3) estimation of the rotation axis by vision, 4) rotation axis estimation using forward kinematics, and 5) estimation ofT AC . A. Arc Identification To establish arcs, ambient features on the end effector are tracked across frames. The choice of feature descriptor is arbitrary, so long as the end effector can be tracked across more than three frames to be able to fit a projection of a 3-D circle to the track. The reference implementation utilizes OpenCV's simple blob detector as input to the Lucas-Kanade optical flow algorithm to obviate the effect of local minima. Tracks that are shorter than a minimum threshold are discarded, as are tracks that do not encode sufficient motion between frames. As features potentially rotate out of view, new features are detected as potential tracks. The use of ambient features forms the novelty of our work, eliminating the need for precision targets and/or well-known features. B. Rotation axis identification (vision) Once tracks are assembled, an initial optimization step is performed to recover the shared rotation axis of the generated ellipses. The choice of coordinate axes corresponds to the image frame, with x right, y down, and z forward. In this optimization step, the parameter vector l k consists of 5+2m terms: an (x, y, z) point on a 3-D line, angles θ and φ indicating the rotation along XY and YZ axes, and a radius from the line and displacement from the origin point for each ellipse to be fit. This parameterization enforces a coaxial constraint such that the centers of every circle in 3-D are collinear. In this stage, the camera is set at (0, 0, −1, 0, 0, 0) with respect to the origin. Initial conditions for the 3-D circle optimization are set to (0, 0, 0, 0, 0), with each potential arc having incremental displacement and unit radius. The coordinates to be recovered are in an arbitrarily transformed up to a scale space. Candidate points on circles corresponding to the parameter vector are then generated and projected into the image plane according to the given camera model's projection matrix. For each detected feature's arc, an ellipse is then fit to these projected points belonging to the parameter vector's 3D circle using a direct least squares method [14]. A residual is then composed by summing the distance between projected points and the candidate ellipse in projective space for each feature track. The use of a coaxial constraint on the underlying model used to generate fit ellipses aids in noise robustness, as a poor track can corrupt an individual ellipse fit. 
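The arc-identification step described above can be sketched as follows with OpenCV's simple blob detector seeding pyramidal Lucas-Kanade tracking. The track-length and minimum-motion thresholds are illustrative rather than the authors' values, and re-seeding of new features as old ones rotate out of view is omitted for brevity.

```python
import cv2
import numpy as np

def track_arcs(frames, min_track_len=20, min_step_px=0.5):
    """Track ambient end-effector features across frames and return their image-plane arcs.
    Blob detection seeds the tracks; Lucas-Kanade optical flow propagates them."""
    detector = cv2.SimpleBlobDetector_create()
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    pts = np.float32([kp.pt for kp in detector.detect(prev_gray)]).reshape(-1, 1, 2)
    tracks = [[tuple(p.ravel())] for p in pts]

    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if len(pts) == 0:
            break
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        kept_pts, kept_tracks = [], []
        for p_new, ok, track in zip(nxt, status.ravel(), tracks):
            if not ok:
                continue                               # lost feature: drop its track
            step = np.linalg.norm(np.array(p_new.ravel()) - np.array(track[-1]))
            if step < min_step_px:
                continue                               # discard tracks encoding no motion
            track.append(tuple(p_new.ravel()))
            kept_tracks.append(track)
            kept_pts.append(p_new)
        tracks, pts = kept_tracks, np.float32(kept_pts).reshape(-1, 1, 2)
        prev_gray = gray

    return [t for t in tracks if len(t) >= min_track_len]
```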
For each measurement k, the candidate parameters l_k identify a line in 3-D; points are generated along this line, projected into the image plane, and a best-fit line in 2-D is recovered; this line represents the projection of the axis of rotation into the image plane. These lines are stored in point-slope form, yielding an observation z_k = (m_k, b_k).

C. Rotation axis identification (arm)

Given that the kinematic chain of the arm is provided, without loss of generality we consider a single joint of the arm. With a revolute joint, the transition through the joint is characterized by seven parameters: six to identify the fixed mechanism of the arm leading into the joint (denoted link L), and one to adjust the output face based on actuator angle (denoted T_{θi}). Each transform is represented in the parent joint's coordinate system; for joint i, the kinematic chain is therefore represented in the coordinate system of joint i − 1 as

T_{(i−1)(i)} = L_i T_{θi}.   (1)

Note that T_{θi} encodes only a single rotation, about the axis of rotation of joint i. Therefore, any point of the form [0 0 z]^T will lie on the rotation axis of joint i, in joint i's coordinate frame. World coordinates of a set of such points may be obtained by applying the arm's kinematic chain using homogeneous transformations.

D. Camera to arm rigid transform estimation

The rigid transformation between the camera and the base of the arm, T̂_AC, is estimated by applying bundle adjustment over all n measurements. Modeling the image noise as zero-mean Gaussian results in a maximum-likelihood estimator, which can be stated as a nonlinear least squares optimization problem of the form

r = e(z, x) R^{−1} e(z, x)^T,   (2)

where e(z, x) is a function of observations z and model parameters x that produces a residual error vector, and R is a block matrix of weights on the projected axes from the vision system, corresponding to the uncertainties in the axis recovery. For this problem, the parameters x to be recovered form a vector (x, y, z, φ, θ, ψ)^T representing the R^6 form of T_AC, while the observations z are the stacked coefficients of the line fits z_k = (m_k, b_k), k = 1...n, describing the axis of rotation for each of the n measurements. Since the recovered projected centers corresponding to each feature track are each located some unknown distance from the origin of the joint immediately prior to the end effector, we cannot directly compare known world geometry of the end effector to the observed axes of rotation. However, we exploit the fact that the true axis of rotation must be collinear with the observed axis of rotation to form a residual function e. For each of the n measurements k, several test points of the form [0 0 z]^T are generated in the last joint's coordinate frame. A minimum of two test points is required to define the line representing the projection of the arm's axis of rotation in the image plane. While more points can be used, the assumption of a rectified camera model guarantees that any additional points will be collinear. These j test points are first transformed into world coordinates using the arm's kinematic model and then projected into the image plane using the current parameter vector x. For each measurement k, given the previously identified line parameters (m_k, b_k), the distance d_{k,j} between each projected test point t_j = (u_j, v_j) and its associated projected axis of rotation a_k is calculated by

d_{k,j} = (−m_k u_j + v_j − b_k) / √(m_k² + 1),   (3)

as given in [15].
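A sketch of how these collinearity residuals feed the estimation of T̂_AC is shown below. The routine project_axis_points is a hypothetical placeholder for the user-supplied step that pushes the [0 0 z]^T test points through the arm kinematics of measurement k and the camera model parameterised by the candidate pose x; the restart bounds are illustrative, and the variance estimate omits measurement weighting.

```python
import numpy as np
from scipy.optimize import least_squares

def point_line_distance(u, v, m, b):
    """Signed distance from pixel (u, v) to the line v = m*u + b, as in (3)."""
    return (-m * u + v - b) / np.sqrt(m**2 + 1)

def residuals(x, observations, project_axis_points):
    """Stack the distances d_kj over all measurements k and test points j.
    observations is a list of (m_k, b_k) line fits; project_axis_points(x, k) is a
    placeholder returning the (u_j, v_j) pixel projections of the test points."""
    r = []
    for k, (m_k, b_k) in enumerate(observations):
        for (u, v) in project_axis_points(x, k):
            r.append(point_line_distance(u, v, m_k, b_k))
    return np.asarray(r)

def estimate_T_AC(observations, project_axis_points, n_restarts=20, seed=0):
    """Levenberg-Marquardt with random restarts over the 6-DOF pose (x, y, z, roll, pitch, yaw)."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_restarts):
        x0 = rng.uniform([-1, -1, -1, -np.pi, -np.pi, -np.pi],
                         [ 1,  1,  1,  np.pi,  np.pi,  np.pi])
        sol = least_squares(residuals, x0, args=(observations, project_axis_points),
                            method='lm')
        if best is None or sol.cost < best.cost:
            best = sol
    # Rough per-parameter variance estimate from the Jacobian at the optimum
    JTJ_inv = np.linalg.inv(best.jac.T @ best.jac)
    return best.x, np.diag(JTJ_inv)
```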
The distance metric of Equation 3 is used to assemble the residual vector $r$ by concatenating over all test points and all measurements:

$$r = [d_{11}\ d_{12}\ \dots\ d_{1j}\ d_{21}\ d_{22}\ \dots\ d_{2j}\ \dots\ d_{n1}\ d_{n2}\ \dots\ d_{nj}]^T \quad (4)$$

We implement our optimization using Levenberg-Marquardt with random restarts to avoid local minima. To avoid the numerical instability resulting from the use of finite differences, we apply the technique of [16] to determine the value of the Jacobian at the estimate $\hat{x}$, defined as

$$J = \left.\frac{\partial r}{\partial x}\right|_{\hat{x}} \quad (5)$$

Once $J$ is determined, the diagonal terms of $(J^T J)^{-1}$ estimate the variance in each recovered component of $\hat{T}_{AC}$.

E. Measures

We evaluate our simulation results through two sets of measures. The first set evaluates the quality of the rotation axis identification from feature track points, while the second evaluates the quality of the camera pose reconstruction. Rotation axis identification quality is measured by the relative error in the 2-D line parameters $[m_k, b_k]^T$ for each measurement $k$; these errors are expressed in pixels$^{-1}$ and pixels, respectively. Camera pose reconstruction quality is measured by the relative error in the 3-D pose parameters $[x, y, z, \phi, \theta, \psi]^T$, expressed in units of meters and radians.

IV. RESULTS

A. Simulation

In simulation, observed arc data are generated against a given camera location and a set of arm end effector positions. An example of a simulated camera view is shown in Figure 4, illustrating the progression from source data to the final measurement that will be used in the optimization: a line through the projected circle centers. To characterize how tracking error propagates through the estimation technique, a Monte Carlo simulation was run at four different noise levels. Since two stages of optimization are present, we present results at both the ellipse fit stage and the final camera position fit stage. Zero-mean Gaussian noise at $\sigma^2 = \{0.1, 0.5, 1.0, 1.5\}$ px$^2$ was added to the projection of simulated ellipse points into the camera image plane. Six rotation observations were simulated; the error distributions are shown in Fig. 5. Each of these simulation runs was then pushed through the second optimization step to recover the 3-D pose of the camera. The error distributions in the Cartesian directions are shown in Fig. 6 and Fig. 7, while the angular errors are shown in Fig. 8 and Fig. 9.

B. Physical System

The system was tested using an Asus Xtion Pro RGB-D camera and a Kinova Jaco2 arm, shown in Figure 2. Calibration of the camera intrinsics occurred offline, while the Kinova Jaco2 forward kinematics were derived from the manufacturer-provided model. The arm's end effector was equipped with a feature-rich covering to ensure adequate ambient features are available to be tracked, shown in Fig. 10. As features were tracked in the image, track data was assembled. In the real system, track data is not as smooth as in simulation; additional processing was implemented to detect jumps in the track that were inconsistent with a smooth arc. A minimum track length was established to remove poor tracks. Measurements were collected independently, with the estimation running offline. Each measurement consisted of approximately sixty seconds of rotation in the wrist joint, followed by an arm reposition for the next measurement. After the optimization completed, we were able to recover a pose consistent with ruler-based estimates of $\hat{T}_{AC}$. Our covariance estimate reported errors on the order of 0.1 mm; these results appear to be excessively optimistic, given the relatively low quality of the arc tracks used as observations.
Reasons for this optimism could include the omission of error weights in the estimation of (J T J ) −1 , implying that all measurements are equally trustworthy. V. DISCUSSION In simulation, our technique was able to recover reasonably accurate estimates of camera position even in the presence of tracking noise. Given the minimal time required to gather a calibration dataset and modest algorithm execution time (approx 5 minutes with a Matlab implementation), our technique seems practical to implement in real-world scenarios. We were able to demonstrate the function of our system aboard a representative real system with varying success. In the line fit distributions, particularly at higher noise levels such as Fig. 5, we observe that the error terms are strongly correlated; we hypothesize that this correlation is due to the external constraint that all ellipses be coaxial. The noise characteristics appear to be dependent on the viewing angle of the camera with respect to the rotation axis of the arm. When recovering the line fit parameters for the measurement data associated with the wider distributions in Fig. 5, the optimizer terminated with a summed residual value several orders of magnitude higher than the summed residuals of the narrow distributions. By treating large residuals as outliers, the overall robustness of the estimation technique could be improved. When the line fit errors are propagated through to the camera pose recovery step, we note that several position estimates appear to be biased, for example, the Z component error in Fig. 6. These error propagation results lead us to conclude that an uncorrelated and unbiased Gaussian error model in output noise is not appropriate for this estimation technique. The variance of the expected errors are within several centimeters even at the higher noise level. Based on these simulation results, we believe that this technique is worth further investigation. While our method has shown promising results, several important areas remain for further investigation. The effect of each of these areas on our results is demonstrated in the Supplementary Materials. A. Measurement constraints Several conditions on arm pose have been identified as necessary (but not sufficient) to have a convergent optimization. For example, in a single crossing scenario, the end effector of the arm was commanded to have the same x y z position with different orientation values. This setup represents a degenerate pathological problem because the location of the camera is underdetermined. Possible locations for the camera lay on a line in 3D space perpendicular to the 2D projection of the crossing point; since the residual computation only considers distance from a test point to the associated rotation axis, all camera positions yield the same residual value. Camera orientation can be recovered, but position cannot. To avoid this condition, a minimum of three measurements are required, with one measurement having a different position from the other two. B. Differences between simulation and real robot The performance under simulation appears to recover the camera position more accurately than the use of a real system; we have several hypotheses to explore these differences. In the real system, a physical camera was used with a basic linear camera model that fails to completely replicate the ideal camera used in simulation. There is also quantization error in the camera vs simulation, particularly in the generation of arc portions used in the ellipse fit stage. 
In simulation, these arc points are carried through with double-precision floating point values throughout; while subpixel resolution techniques in real imagery can help to mitigate this effect, further work is required to improve the fidelity of the simulation. We studied error in tracking by introducing zeromean independent Gaussian noise into the arc track data image coordinates; observing the real tracks produced in our reference implementation, shown in Fig. 10, a Gaussian noise model may not be appropriate. C. Optimization initial values In both stages of optimization, testing has revealed a significant dependency on initial conditions to the optimizer to yield a convergent solution due to degeneracies in the 3D circle fitting procedure; this phenomenon is very common in SLAM algorithms as noted by [17]. Since the residual is defined as the distance between the projected ideal circle and the observed arc track, there is no analysis of curvature to ensure that the projected ideal circle is approximately oriented at initial evaluation of the residual function. D. Known kinematics This approach relies on the manufacturer-provided model of the arm kinematics in order to resolve the rotation axis at the end effector with respect to the base of the arm as truth. This assumption could be improved by adding optimization parameters such as angular encoder bias to the various joints of the arm, to allow for installation errors. Other types of deviations from the manufacturer model, such as damage or wear, offer significant challenges in modeling but might be amenable to an online error analysis such as in [1]. E. Precalibrated camera For our optimization to be successful, we require known camera parameters and linearly rectified camera data as input. An important observation is that monocular self-calibration techniques such as [18] and our technique are not exclusive; as self-calibration relies on motion in the environment and the arm is rigidly mounted to the camera, the arm will appear as a static obstruction. This requires the arm to be held stationary during calibration, while the reference implementation of the arc tracker requires the background to be stationary. F. Tracker noise In the reference implementation, a basic tracker was used that did not attempt to enforce any motion dynamics on the detected tracks. An improvement on this work would include integrating a more advanced tracker that can incorporate a motion model to smooth out detected arcs and make the tracker more robust to noisy image data. G. Path planning for calibration poses An important unanswered question in our technique is the method of determining what constitutes a 'good' pose for use in calibration. Ideally, a statistical metric could be determined to evaluate the information content of a new pose given the set of existing poses; this metric could then be used to develop a path planner that identifies a series of most valuable poses for reducing error in the final estimate ofT AC under constraints of robot kinematics and camera field of view. H. Coordinate system consistency While we have the convenience of defining our coordinate frames without regard to external needs, a real system must take into account a wide range of sensor data that may or may not even use the same coordinate or rotation convention. Multiple coordinate system standards pose a hazard to interpretation and development of calibration routines and reduce the generality of our technique. I. 
Incorporating measurement weights

In future work, we envision using the residual value of the rotation axis identification stage as a means of weighting the relative value of that measurement in the pose reconstruction stage. The residuals of all measurements could be set on a common scale to aid in outlier rejection, such that poor tracking in one particular measurement does not contaminate the overall result.

VI. CONCLUSIONS

We have presented a method for calibrating an articulated arm with a wrist joint to a camera without requiring calibration targets, instead relying on structured arm motion and ambient features on the end effector. Our method requires no knowledge of the mechanism of the end effector and no unique features to be present. We have validated our results in simulation and demonstrated the method with real-world data, yielding promising results. A Monte Carlo analysis of error propagation verifies that small errors in the quality of feature tracks do not cause the resulting position estimates to degrade dramatically. Further testing is required to explore system degeneracies and validate performance in realistic environments.

VII. ACKNOWLEDGMENTS

S. McGuire is supported by a NASA Space Technology Research Fellowship through grant number NNX15AQ14H. This work is also supported by the Toyota Motor Corporation.

Fig. 2: Real-world setup of the calibration problem showing a fixed camera rigidly mounted with respect to a robotic arm.
Fig. 4: Simulated projected view of a set of three feature tracks. Source track data are in blue stars, fitted ellipses in solid red, and the projected axis of rotation through the circle centers in green.
Fig. 5: Error distribution of rotation axis fits with $\sigma^2 = 0.1$ px$^2$ and $\sigma^2 = 1.5$ px$^2$ noise, showing error correlation and the impact of increased tracking noise.
Fig. 6: Cartesian position error distribution with $\sigma^2 = 0.1$ px$^2$ noise.
Fig. 8: Angular error distribution with $\sigma^2 = 0.1$ px$^2$ noise.
Fig. 9: Angular error distribution with $\sigma^2 = 1.5$ px$^2$ noise.
Fig. 10: Four views of the real system under test, showing an overview, the camera view, the detected features, and the tracked arcs.

REFERENCES

[1] N. Keivan and G. Sibley, "Asynchronous adaptive conditioning for visual-inertial SLAM," I. J. Robotic Res., vol. 34, no. 13, pp. 1573-1589, 2015.
[2] V. Pradeep, K. Konolige, and E. Berger, "Calibrating a multi-arm multi-sensor robot: A bundle adjustment approach," in Experimental Robotics. Springer, 2014, pp. 211-225.
[3] G. Jiang, H.-t. Tsui, L. Quan, and A. Zisserman, "Geometry of single axis motions using conic fitting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 10, pp. 1343-1348, 2003.
[4] H. S. Sawhney, J. Oliensis, and A. R. Hanson, "Description and reconstruction from image trajectories of rotational motion," in Computer Vision, 1990. Proceedings, Third International Conference on. IEEE, 1990, pp. 494-498.
[5] F. Marinello, P. Bariani, E. Savio, A. Horsewell, and L. De Chiffre, "Critical factors in SEM 3D stereo microscopy," Measurement Science and Technology, vol. 19, no. 6, p. 065705, 2008.
[6] C. Liu and W. Hu, "Relative pose estimation for cylinder-shaped spacecrafts using single image," IEEE Transactions on Aerospace and Electronic Systems, vol. 50, no. 4, pp. 3036-3056, 2014.
[7] M. Hutter and N. Brewer, "Matching 2-D ellipses to 3-D circles with application to vehicle pose estimation," arXiv preprint arXiv:0912.3589, 2009.
[8] K. Kanatani and W. Liu, "3D interpretation of conics and orthogonality," CVGIP: Image Understanding, vol. 58, no. 3, pp. 286-301, 1993.
[9] Q. Chen, H. Wu, and T. Wada, "Camera calibration with two arbitrary coplanar circles," in Computer Vision-ECCV 2004. Springer, 2004, pp. 521-532.
[10] H. Alismail, L. D. Baker, and B. Browning, "Automatic calibration of a range sensor and camera system," in 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT), 2012 Second International Conference on. IEEE, 2012, pp. 286-292.
[11] I. Lundberg, M. Bjorkman, and P. Ogren, "Intrinsic camera and hand-eye calibration for a robot vision system using a point marker," in Humanoid Robots (Humanoids), 2014 14th IEEE-RAS International Conference on. IEEE, 2014, pp. 59-66.
[12] D. Forsyth, J. L. Mundy, A. Zisserman, C. Coelho, A. Heller, and C. Rothwell, "Invariant descriptors for 3-D object recognition and pose," IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 10, pp. 971-991, 1991.
[13] I. Frosio, A. Alzati, M. Bertolini, C. Turrini, and N. A. Borghese, "Linear pose estimate from corresponding conics," Pattern Recognition, vol. 45, no. 12, pp. 4169-4181, 2012.
[14] A. W. Fitzgibbon, M. Pilu, and R. B. Fisher, "Direct least-squares fitting of ellipses," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 5, pp. 476-480, May 1999.
[15] E. W. Weisstein, "Point-line distance--2-dimensional," http://mathworld.wolfram.com/Point-LineDistance2-Dimensional.html, accessed: 2016-03-06.
[16] J. R. Martins, P. Sturdza, and J. J. Alonso, "The complex-step derivative approximation," ACM Transactions on Mathematical Software (TOMS), vol. 29, no. 3, pp. 245-262, 2003.
[17] P. Newman and K. Ho, "SLAM-loop closing with visually salient features," in Robotics and Automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE International Conference on. IEEE, 2005, pp. 635-642.
[18] N. Keivan and G. Sibley, "Online SLAM with any-time self-calibration and automatic change detection," in IEEE International Conference on Robotics and Automation, ICRA 2015, Seattle, WA, USA, 26-30 May 2015, pp. 5775-5782.
[]
[ "Spin Transport Properties of Fractal and Non-Fractal Thinfilm", "Spin Transport Properties of Fractal and Non-Fractal Thinfilm" ]
[ "Cheng-Yen Ho \nDepartment of Physics\nNational Taiwan University\n10617TaipeiTaiwan\n", "Ching-Ray Chang \nDepartment of Physics\nNational Taiwan University\n10617TaipeiTaiwan\n" ]
[ "Department of Physics\nNational Taiwan University\n10617TaipeiTaiwan", "Department of Physics\nNational Taiwan University\n10617TaipeiTaiwan" ]
[]
Spatial behavior of spin transport in a Sierpinski gasket fractal is studied from two dimensions to quasi-one dimension subject to the Rashba spin-orbital coupling. With two normal metal leads represented by self-energy matrix, discretizing the derived continuous Hamiltonian to a tight-binding version, Landauer-Keldysh formalism for nonequilibrium transport can be applied. It was observed that the spin Hall effect presents in the distribution of spin density critically depends on the fractal structure and the shape of the thinfilm. The local spin density and transmission are numerically tested by the present quantum transport calculation for the fractal and non-fractal thinfilm varying from two-dimensional square-lattice into quasi-one dimensional fractal shape (Sierpinski triangles).
null
[ "https://arxiv.org/pdf/1109.6120v1.pdf" ]
117,104,865
1109.6120
bc989d6a14f79d728da1828725377620acaba934
Spin Transport Properties of Fractal and Non-Fractal Thinfilm 28 Sep 2011 Cheng-Yen Ho Department of Physics National Taiwan University 10617TaipeiTaiwan Ching-Ray Chang Department of Physics National Taiwan University 10617TaipeiTaiwan Spin Transport Properties of Fractal and Non-Fractal Thinfilm 28 Sep 2011(Dated: October 4, 2011)numbers: 7225b7363Nm7170Ej Spatial behavior of spin transport in a Sierpinski gasket fractal is studied from two dimensions to quasi-one dimension subject to the Rashba spin-orbital coupling. With two normal metal leads represented by self-energy matrix, discretizing the derived continuous Hamiltonian to a tight-binding version, Landauer-Keldysh formalism for nonequilibrium transport can be applied. It was observed that the spin Hall effect presents in the distribution of spin density critically depends on the fractal structure and the shape of the thinfilm. The local spin density and transmission are numerically tested by the present quantum transport calculation for the fractal and non-fractal thinfilm varying from two-dimensional square-lattice into quasi-one dimensional fractal shape (Sierpinski triangles). I. INTRODUCTION Intensive efforts on spin Hall effect (SHE) both experimentally and theoretically have successfully built another milestone in condensed matter physics 1 . Spin separation in semiconductors is not only possible but natural, so that manipulating spin properties of charge carriers in electronics is promising [2][3][4] . Recently different types of SHE were reported on different geometrical setup of both magnetic 5, 6 and non magnetic materials 7,8 . Local spin Hall effect often attributed as extrinsic SHE [9][10][11] , and intrinsic SHE 12 . The intrinsic SHE, spin separation is solely due to the underlying spin-orbit coupling in the band structure, so that SHE can exist even in systems free of scattering within a finite size system 13,14 . However, experimentally, most observations so far have been attributed to the extrinsic SHE. More recently nonlocal spin Hall effect was also found in various systems 5,11,15 , including graphene 6,16,17 . Considerable effort in this direction has already revealed the unique features of spinpolarized electron transport in the so-called two-terminal nano-or mesoscopic devices. Indeed, this is a fast developing field, and has also stimulated lot of theoretical work based on the non-equilibrium Green's function approach within the density functional theory [17][18][19] . A two-terminal device is essentially a single path device, SHE studies on a two-terminal device with defects show a lot interesting results and the local spin densities can be affected seriously even by a small point defects 20,21 for the breaking of translational symmetry. In the present studies we undertake an in-depth study of the spin transport in a fractal network based on the nonequilibrium Green's function approach, the Landauer-Keldysh formalism (LKF) [17][18][19] . We choose a fractal geometry following a Sierpinski gasket (SPG) 22,23 and examples of planar gasket is shown in Fig. 1. With the self-similarity of the system and the enlargement of the fractal structure, our studies provide an understanding of the spin transport behavior transition from two dimensional system to quasi-one dimensional system. Our motivation for the investigation of spin transport within fractal system is mainly resulted from our previous stud-ies on impurity system. 
The defect and impurities serious affect the local charge density and local spin density for the influences on spin transport, moreover, the relations between spin current and charge current within a film with defect behave very different from that within an ideal film 20,21 . Even though the point-defect induced translational symmetry breaking and interference among different conducting channels provide subtle reasons for the variation of the spin density and current density, the detailed analysis of defect and impurities of the multiterminal systems is very rare and the multiply connected fractal geometry is not even been probed. The state-ofart lithographic technologies made all artificial structures become possible. Therefore, the studies of spin transport of self-similarity of systems such as the SPG provide the possibility of investigating the local density of charge and spin on different multiply connected fractal geometry. Secondly, a very unique phenonemon in spin transport remains to be understood is the spin processional phase through different travelling channel, therefore, the quantum interference among the confined paths of the fractal structures will be an interesting and provide a good way of studying the self-similarity with reducing scale. Even though the charge transport on percolating clusters had been studied but the local charge density and local spin density, and the Aharonov-Bohm effect in a fractal system without translational invariance remains unclear till now 24,25 . Moreover, the density of transistors on an integrated circuit doubles every couple of years. The scale of devices is one of important factors. Fractal structures also can study the limits of the self-similarity with reducing scale for a profound 2D-system. The paper is organized as follows, we present Landauer-Keldysh formalism with a tight-binding framework to model the fractal system in Section II. Numerical results of local charge density and local spin density of different size of fractal system are studied and comparison among continuous film in low dimensional system also been discussed in Section III. We draw our conclusions in Section V. The Sierpinski gasket is named after the Polish mathematician Sierpinski in 1915 26 and is one of the well-known fractal geometry. Since fractal geometry was established in 1967, there are several ways to obtain Sierpinski gasket (SPG) 5, 6,[22][23][24][25]27,28 . Following by previous studies, we choose SPG as our sample and four generations of SPG are shown in Fig. 1. The SPG is perfectly self similar, an attribute of many fractal structures. Any triangular portion is an exact replica of the whole structure (Fig. 1). The dimension of the SPG is log3/log2 = 1.5849 and lies dimensionally between a line and a plane. B. The Landauer Keldysh formalism The Landauer Keldysh formalism is also called nonequilibrium Green's function (NEGF) formalism 29 and in principle carries all the physical information of the investigated under a non-equilibrium but approximately steady process 30 . It is usually used to simulate current and charge density in atomic-scale quantum mechanical transport when bias applied. Also it can be used to calculate spin density and transmission in different generations of sample. We describe the conductor by an array of numbered lattice sites, n = 1, 2, ..., N × N . For a 32 × 32 sample contains a square lattice of 1024 sites, as shown in Fig. 1. 
The corresponding Hamiltonian matrix is written in the site basis, and each matrix element $H_{m,n}$ records the relation between sites $m$ and $n$. After labeling the lattice sites, we describe the interaction between sites using nearest-neighbor hopping, i.e., the tight-binding model. In second quantization, the one-particle Hamiltonian is composed of on-site terms and a hopping term,

$$H = \sum_n \varepsilon_n c_n^{\dagger} c_n + \sum_{\langle m,n \rangle} c_m^{\dagger} t_{m,n} c_n \quad (1)$$

where $c_n^{\dagger}$ ($c_n$) is the electron creation (annihilation) operator, $\varepsilon_n$ is the on-site energy at the $n$-th site, and $t_{m,n}$ is the hopping matrix. For a system with Rashba spin-orbital interaction (RSI)$^{31}$, for example, the hopping matrix is $t_{m,n} = -t_0 I - i t_{so} (\vec{\sigma} \times \hat{e}_{n \to m}) \cdot \hat{e}_z$, where $I$ is the $2 \times 2$ identity matrix, $t_0$ is the kinetic hopping strength, and $t_{so}$ is the hopping strength of the RSI. Keeping only nearest-neighbor hopping, one writes

$$H = \sum_n \varepsilon_n c_n^{\dagger} c_n + \sum_{\langle m,n \rangle} c_m^{\dagger} \left[ -t_0 I - i t_{so} (\vec{\sigma} \times \hat{e}_{n \to m}) \cdot \hat{e}_z \right] c_n \quad (2)$$

where $\langle m,n \rangle$ means that site $m$ interacts only with its nearest-neighbor sites $n$; further, $|r_m - r_n| = a$ is satisfied, with $a$ the lattice spacing. With the Hamiltonian established, we consider two ideal leads (free of spin-orbit interaction) connected to the left and right sides of the sample, respectively. Deriving the self-energy as in Ref. 19, we obtain the self-energy matrix of the left (right) lead, $\Sigma_{left(right)}$. Next, using both the tight-binding Hamiltonian and the lead self-energy matrices, one can build up the retarded Green function matrix

$$G^R(E) = [E I - H - \Sigma_{left}(E) - \Sigma_{right}(E)]^{-1} \quad (3)$$

where $I$ is the $2N \times 2N$ identity matrix, $H$ is the tight-binding Hamiltonian matrix of the conductor, and $\Sigma_{left(right)}(E)$ is the self-energy matrix of the left (right) lead. Next, the lesser self-energy matrix is given by

$$\Sigma^{<}(E) = -\sum_{p=L,R} \left[ \Sigma_p(E - eV_p) - \Sigma_p^{\dagger}(E - eV_p) \right] f_0(E - eV_p) \quad (4)$$

where $f_0$ is the Fermi function and $eV_p$ is the electric bias potential energy applied on lead $p \in \{L \text{ (left)}, R \text{ (right)}\}$. In the numerical calculation, we set $eV_{left} = +eV_0/2$ and $eV_{right} = -eV_0/2$. Moreover, for simplicity we work at zero temperature in order to ignore thermal fluctuations, so the Fermi function reduces to a step function. Finally, we obtain the lesser Green function via the kinetic equation

$$G^{<}(E) = G^R(E)\, \Sigma^{<}(E)\, G^A(E) \quad (5)$$

where $G^A = \{G^R\}^{\dagger}$. The lesser Green's function $G^{<}$ can then be used to calculate the local charge density,

$$e \langle \hat{N}_n \rangle = \frac{e}{2\pi i} \int_{-\infty}^{\infty} dE \, \mathrm{Tr}_s \left[ G^{<}_{n,n} \right] \quad (6)$$

where the trace $\mathrm{Tr}_s$ is taken in spin space and $\hat{N}_n$ is the electron number operator at site $n$. Similarly, the local spin density is

$$\langle \vec{S}_n \rangle = \frac{\hbar}{2} \frac{1}{2\pi i} \int_{-\infty}^{\infty} dE \, \mathrm{Tr}_s \left[ \vec{\sigma} \, G^{<}_{n,n} \right] \quad (7)$$

where $\vec{S}_n = (S_x, S_y, S_z)$ is the spin operator at site $n$ of the conductor. We assume our devices lie in the x-y plane, with the current flowing along the x direction. The local spin density can be separated into out-of-plane ($\langle S_z \rangle = S_{\perp}$) and in-plane ($\langle S_x \rangle$, $\langle S_y \rangle$) components. We set the hopping energy $t_0 = 1$, the electron mass $m_e = 1$, the electron charge $q = 1$, and the reduced Planck constant $\hbar = h/2\pi = 1$ as our units. We can also calculate the transmission,

$$T = \mathrm{Tr}_s \left[ \Gamma_{left}\, G^R\, \Gamma_{right}\, G^A \right] \quad (8)$$

where $\Gamma_{left(right)} = i [\Sigma_{left(right)} - \Sigma_{left(right)}^{\dagger}]$ describes the coupling of the conductor to the two leads.

III. NUMERICAL RESULTS AND DISCUSSIONS

A. First Generation of SPG

For the first generation of SPG, the band bottom is set at $E_b = -4t_0$ and the Fermi level at $E_f = -3.8t_0$.
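To make the procedure above concrete, the following NumPy sketch builds the Rashba tight-binding Hamiltonian of Eq. (2) on an arbitrary pattern of occupied sites (such as an SPG mask) and forms the retarded Green function of Eq. (3). It is an illustrative sketch only: the function names and parameter values are ours, and, in particular, instead of the exact lead self-energies derived following Ref. 19, a simple wide-band (constant imaginary) self-energy is placed on the contact sites, an assumption that a faithful implementation would replace with the proper semi-infinite-lead expression.

```python
import numpy as np

# Pauli matrices and the 2x2 identity in spin space
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rashba_hamiltonian(mask, t0=1.0, tso=0.1, eps=0.0):
    """Tight-binding Hamiltonian of Eq. (2) on a square lattice.

    mask : 2-D boolean array; True marks an atomic site (e.g. an SPG
           pattern), False an empty site.
    Hopping along +x is -t0*I2 + 1j*tso*sy, along +y is -t0*I2 - 1j*tso*sx,
    which follows from -t0*I - i*tso*(sigma x e)·e_z for e = x_hat, y_hat.
    """
    ny, nx = mask.shape
    site = -np.ones(mask.shape, dtype=int)
    site[mask] = np.arange(mask.sum())          # map occupied cells to site indices
    N = int(mask.sum())
    H = np.zeros((2 * N, 2 * N), dtype=complex)

    t_x = -t0 * I2 + 1j * tso * sy              # hopping n -> n + x_hat
    t_y = -t0 * I2 - 1j * tso * sx              # hopping n -> n + y_hat

    for iy in range(ny):
        for ix in range(nx):
            if not mask[iy, ix]:
                continue
            n = site[iy, ix]
            H[2*n:2*n+2, 2*n:2*n+2] = eps * I2  # on-site energy
            for (dy, dx), t in (((0, 1), t_x), ((1, 0), t_y)):
                jy, jx = iy + dy, ix + dx
                if jy < ny and jx < nx and mask[jy, jx]:
                    m = site[jy, jx]
                    H[2*m:2*m+2, 2*n:2*n+2] = t           # c_m^dag t_{m,n} c_n
                    H[2*n:2*n+2, 2*m:2*m+2] = t.conj().T  # Hermitian conjugate
    return H, site

def retarded_green(H, E, contact_sites, gamma=0.5):
    """G^R(E) of Eq. (3), with a wide-band lead approximation (assumption):
    Sigma = -i*gamma/2 on each contact site instead of the exact
    semi-infinite-lead self-energy of Ref. 19."""
    dim = H.shape[0]
    Sigma = np.zeros((dim, dim), dtype=complex)
    for n in contact_sites:
        Sigma[2*n:2*n+2, 2*n:2*n+2] += -0.5j * gamma * I2
    GR = np.linalg.inv(E * np.eye(dim) - H - Sigma)
    return GR, Sigma
```

From $G^R$ and the lead broadening $\Gamma = i(\Sigma - \Sigma^{\dagger})$, the transmission of Eq. (8) follows directly, while the lesser Green function of Eq. (5) gives the local charge and spin densities of Eqs. (6) and (7).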
Here, the black regions of the sample are empty sites and the white regions are atomic sites for transporting charges and spins, as shown in Fig. 2. The width connects with the each two leads is 16a. From Fig. 2, under low bias situation, the charge transport separated into two channels and the local charge accumulate in the bottom channel. The first generation of SPG look like a combination of linear conducting line coupled with a localized quantum dot(QD). A QD is a part of material confined in three dimensions 32,33 . Fig. 2b clearly indicates the up-spin electrons mainly localized at QD-like region and the down-spin electrons is the conducting carrier for first generation of SPG. The separation of up-spin electrons and down-spin electrons are resulted from the coupling of RSI. Therefore, first generation of SPG behave as good spin filter under RSI. When raising into high bias, electrons with high energy injected from the lead and transport through the device. Now high energy electrons can go from the linear region into the QD-like region and then back to the linear region again (Fig. 3a). Therefore, charge did not localized in the QD-like region as in Fig. 2a, and conducting path penetrating into the QD-like region, i.e., the bottom of the device. Comparing with the low bias situation, both the down-spin and up-spin electrons can form conducting channels. While within the QD-like region, there are still mainly up-spin electrons and only a few down-spin electrons trapped in the center of the bottom of the device at high bias (Fig. 3b) In first generation of SPG, quantum dot-like region occurs at low bias. Beside the bottom square shape of the first generation of SPG is QD-like and spin-polarized electrons driven by the RSI will let the up-spin electrons trapped within the QD-like regions. Therefore, a spindependent potential also form within the QD region and thus enhance the confinemnt of up-spin electrons. Therefore, first generation of SPG behaves as good spin filter with RSI. B. Second Generation of SPG The second generation of SPG (Fig. 1b) is composed of three self-similar building blocks of first generation (Fig. 1a). There are empty sites (black regions) and atomic sites (white regions) in the device. The conditions are setting in the band bottom E b = −4t 0 and the Fermilevel E f = −3.8t 0 , as the same as first generation. However, the connecting width with two leads is different and is 8a instead of 16a. The second generation of SPG geometrically look like three first generation of SPG connects in series (Fig. 1a and Fig. 1b). However for low bias situation, the electrons are mainly accumulated in the bottom of the device in second generation of SPG (Fig. 4a). There is almost no clear conducting channel can be observed with the low bias. Additionally, comparing with the first generation of SPG, the second of SPG is topological different from the first generation of SPG with a hole at the center of fractal structure. As reported in Ref. 20 , a point-defect induced symmetry breaking significantly changes the conduction channel. The charge density seems to be in favor to pass through the bottom of the device, instead of through the upper line. The width of the top region of the device can change the conducting path of device 20,21 . The local charge density indicated the charge accumulates in corners of the building blocks in Fig. 4a. 
Local spin density also indicated that the spin actually associated with charge and accumulated at the same sites but with alternating polarization of the spin (Fig. 4b). Nevertheless, both local charge density and local spin density show that the second generation of SPG is much localized than the first generation of SPG. A possible reason is from the interference of spin-polarized electrons through the channels 34 around the central holes and thus the conducting paths almost disappear in low bias. It should be noted that there is also no clearly QD-like behavior in second generation. Focusing on S z , the 2 nd generation of SPG can be interpreted as the superposition of three building blocks of 1 st generation of SPG ( Fig. 2b and Fig. 2a) but with one bottom down-spin block and two up-spin blocks atop (Fig. 4b). As raising higher bias, QD-like behavior appear at the bottom building block of the device (Fig. 5b) but similar QD-like behavior does not occur in the top two building blocks. The condition of confinement is highly required without electron leaking. Here a higher bias did not provide enough energy for the electrons hopping into the bottom building block and also jumping out again. Therefore, a QD-like behavior only observed in bottom building block. Even though the geometrical similarity within SPG, the electrodes only connects to the left and right ends of the upper building blocks and thus the similarity of three building blocks was removed by the bias. Therefore, the top two blocks become conducting channels and the bottom one acts as QD-like region. We also noted that the Fermi wavelength and the unit length of building blocks are also critical for the conducting chan-nels, As the conducting distance between two leads equals to the multiple of Fermi wavelength, several oscillating periods of local spin density and local charge density can be form in SPG 35 . C. Third Generation of SPG The 3 rd generation is setting in the band bottom E b = −4t 0 and the Fermi-level E f = −3.39t 0 . Here the sites of the lattice are reduced much more (Fig. 1c) and the width connecting with two leads become 4a, so that the Fermi-level increases. In 3 rd generation, the central defect enlarges and there are also other smaller defects. Therefore there are many vortexes around the building blocks. Spin and charge are accumulated at the corners of building blocks. Even though the geometrical self-similarity of the 3 rd generation of SPG, due to the complicated paths, the transports of self-similarity of the 1 st and 2 nd generation of SPG are hardly to see even under high bias in Fig. 7b. Under low bias, the charge density appears only in the left of the device and it indicates that the bias is weak enough to drive the charge to the other end of SPG for too many empty sites (Fig. 6a). For a high bias, local charge density is around the empty sites, however, it does not directly go along the narrow line to the other end (Fig. 7a). It also noted that the local spin density shows only the up-spin electrons were selected to transport within this downward triangular SPG (Fig. 7b). D. Fourth Generation of SPG The fourth generation is setting in the band bottom E b = −4t 0 . Because reducing the sites of the sample and the width, 2a, connecting with two leads, the Fermi-level moves to −2.9t 0 . Under the low bias, the spins and the charges transport hardly in the region of the upper line in Fig. 8. The self-similar feature of the fractal is not observed at all. 
Under high bias, there are accumulated charges and spins in the corners of building blocks of the sample in Fig. 9. Similar selection of spin orientation was observed in this downward triangular SPG. E. Transmission In a 2D square sample, the transmission starts from −4t 0 shown as blue line in Fig. 10a. For 1 st generation of SPG (brown) and 2 nd generation of SPG (purple), the transmissions are almost the same as in square sample ( Fig. 10a) but with curve slope change to linear dependence of energy. The lower threshold of transmission of 3 rd generation of SPG shifts from −4t 0 to −3.39t 0 in Fig. 10b, and the transmission of 4 th generation shifts further to −2.9t 0 . Our results indicated that the transmissions of fractal structure are affected both by the shape of the sample and the defects within the samples seriously. For 1 st generation of SPG, the leads contact the sample with 16 sites (16a) on both left and right. For 2 nd generation, the left and right leads contact the sample only with 8 sites (8a). Since the width of two leads are comparatively wider than the other higher generation of SPG, and thus the spin-polarized electrons can be through two leads smoothly. For 3 rd generation of SPG, the two leads are in contact with only 4 sites (4a) in each side and the transmission reduces heavily. It should be noted that the central defects of the higher generation of SPG make their transport behavior is topological different from the lower generation of SPG. The threshold of the transmission shifts from −4t 0 to −2.9t 0 with increasing number of defects, i.e., a higher generation of SPG. For the 4 th generation of SPG, a visible gap even appears within −2.7t 0 to −2.5t 0 (Fig. 10b) and this arises from the self-similarity of SPGs. For the higher generation of SPG, the contact sites with leads reduces and the transmission also reduces accordingly for the narrowing of the conducting channel and most charges and spins are trapped at the bottom of fractal structure. A self-similarity behavior of transmission can be observed from 2 nd generation of SPG for the topological difference with the lower generation of SPG (Fig. 10b). IV. CONCLUSION In conclusion, we had studied spin transport in a planar Sierpinski gasket fractal formed by elementary triangle unit. We also compare the local charge density, the local spin density and also spin Hall effect on the planar Sierpinski gasket with a continuous planar and linear structure. For four different planar Sierpinski gasket fractal structures in our analysis, with a low bias, there are little fractal-like patterns in first generation and second generation. Moreover, because of too many holes on the third generation and fourth generation, the electrons become very localized and transport can be only observed along the linear edge. Therefore, hardly any fractal patterns were observed for both the local spin density and local charge density. Indeed there is quantum dot region was observed in first generation ( Fig. 2 and Fig. 3) and also barely was observed in second generation (Fig. 5). The behavior can be considered to be a similar arrangement of single level QDs 36,37 sitting at the vertices of a continuous line. Also the spin density and the charge density indicate that the transmissions in fractal structures are smaller than linear structures. And this is consistent with the results of transmission. 
A very interesting phenomenon was observed in our studies, due to the fractal shape, spin transport can be separated from the charge transport and details mechanism still need to be studied (Fig. 6b). It should be noted that the asymmetry of a planar Sierpinski gasket fractal could trap the spin in the bottom from the spin Hall effect. The up and down spinpolarized electrons accumulate on the wide and narrow edges of a planar Sierpinski gasket fractal and thus the up and down spin electrons can then be selected from the asymmetrical shape. Under the appropriate condition, there is only one kind of spin electron can pass through the asymmetrical device. This geometrical selection provides a feasible way of producing pure spin current. The transmissions of four different generations are calculated numerically. As reducing the sites of the devices, the widths connection with two leads also decrease. From the analysis of different generation of SPG, it was found that the contact widths of the leads determine the upper bound of the transmission. The transmission affected by the connecting width with two leads and the threshold of the transmission depends on the number of defects. First generation of SPG. Second generation of SPG. Third generation of SPG. Fourth generation of SPG. FIG. 1 : 1(Color online) Fractal structures of SPGs: an example of planar gasket with the self-similarity of the system. Four SPGs are attached to two semi-infinite metallic leads, left lead and right lead, on the edges (yellow lines). All SPGS are 32 x 32 square lattice and the white regions are the atomic sites. The widths connects with the each two leads are (a)16a in first generation of SPG, (b)8a in second generation of SPG, (c)4a in third generation of SPG and (d)2a in fourth generation of SPG. FIG. 2 : 2(Color online) Local spin density and charge density under a low bias in first generation. InFig. 2b, there is a linear wire interacting with a quantum dot. FIG. 3 : 3(Color online) Local spin density and charge density under a strong bias in first generation. FIG. 4 : 4(Color online) Local spin density and charge density under a low bias in second generation. FIG. 5 : 5(Color online) Local spin density and charge density under a strong bias in second generation. FIG. 6 : 6(Color online) Local spin density and charge density under a low bias and E f = −3.69 in third generation. FIG. 7 : 7(Color online) Local spin density and charge density under a strong bias and E f = −3.39 in third generation. FIG. 8 :FIG. 9 : 89(Color online) Local spin density and charge density under a low bias and E f = −2.9 in fourth generation. (Color online) Local spin density and charge density under a strong bias and E f = −2.9 in fourth generation. FIG. 10: (Color online) Transmission of SPG versus bias energy. The bias energy is normalized with the hopping energy t0. (a) The transmission of a 2D square reference sample (blue line) starts from around −4t0 and then increases with energy. The non-zero transmission of higher generation of SPG shifts to smaller values. (b) Zoom in on the higher generation of SPGs with the central defects and the transmission was suppressed significantly. Note that the threshold of higher generation of SPGs also shifts to left and the central triangle defects are with effect in third generation (green dagger-line) and fourth generation (red triangle-line). ACKNOWLEDGMENTSWe appreciate the Taiwan National Science Council Grant No. NSC 98-2112-M-002-012-MY3 for supporting . 
C S Tang, A G , K A Chao, http:/link.aps.org/doi/10.1103/PhysRevB.71.195314Phys. Rev. B. 71195314C. S. Tang, A. G. Mal'shukov, and K. A. Chao, Phys. Rev. B 71, 195314 (2005), URL http://link.aps.org/doi/10.1103/PhysRevB.71.195314. . J Lodder, D Monsma, R Vlutters, T Shimatsu, ; G Tatara, H Kohno, http:/link.aps.org/doi/10.1103/PhysRevLett.92.0866010304-8853Journal of Magnetism and Magnetic Materials. 11986601Phys. Rev. Lett.J. Lodder, D. Monsma, R. Vlutters, and T. Shi- matsu, Journal of Magnetism and Magnetic Mate- rials 198-199, 119 (1999), ISSN 0304-8853, URL http://www.sciencedirect.com/science/article/pii/S0304885398012414. 3 G. Tatara and H. Kohno, Phys. Rev. Lett. 92, 086601 (2004), URL http://link.aps.org/doi/10.1103/PhysRevLett.92.086601. . E G Mishchenko, A V Shytov, B I Halperin, ; L Sun, C Fang, Y Guo, 10.1063/1.348864700218979Phys. Rev. Lett. 9363715E. G. Mishchenko, A. V. Shytov, and B. I. Halperin, Phys. Rev. Lett. 93, 226602 (2004), URL http://link.aps.org/doi/10.1103/PhysRevLett.93.226602. 5 L. Sun, C. Fang, and Y. Guo, 108, 063715 (2010), ISSN 00218979, URL http://dx.doi.org/doi/10.1063/1.3488647. . L Sun, C Fang, Y Song, Y Guo, Journal of Physics: Condensed Matter. 22445303L. Sun, C. Fang, Y. Song, and Y. Guo, Journal of Physics: Condensed Matter 22, 445303 (2010), URL http://stacks.iop.org/0953-8984/22/i=44/a=445303. . T Kimura, Y Otani, T Sato, S Takahashi, S Maekawa, http:/link.aps.org/doi/10.1103/PhysRevLett.106.157208Phys. Rev. Lett. 8 A. Fert and P. M. Levy98157208Phys. Rev. Lett.T. Kimura, Y. Otani, T. Sato, S. Takahashi, and S. Maekawa, Phys. Rev. Lett. 98, 156601 (2007), URL http://link.aps.org/doi/10.1103/PhysRevLett.98.156601. 8 A. Fert and P. M. Levy, Phys. Rev. Lett. 106, 157208 (2011), URL http://link.aps.org/doi/10.1103/PhysRevLett.106.157208. . Y K Kato, R C Myers, A C Gossard, D D Awschalom, http:/link.aps.org/doi/10.1103/PhysRevLett.45.855Science. 306Y. K. Kato, R. C. Myers, A. C. Gossard, and D. D. Awschalom, Science 306, 1910 (2004), http://www.sciencemag.org/content/306/5703/1910.full.pdf, URL http://www.sciencemag.org/content/306/5703/1910.abstract. . V Sih, R C Myers, Y K Kato, W H Lau, A C Gossard, D D Awschalom, 10.1038/nphys009Nat Phys. 131V. Sih, R. C. Myers, Y. K. Kato, W. H. Lau, A. C. Gos- sard, and D. D. Awschalom, Nat Phys 1, 31 (2005), URL http://dx.doi.org/10.1038/nphys009. . T Seki, Y Hasegawa, S Mitani, S Takahashi, H Imamura, S Maekawa, J Nitta, K Takanashi, 10.1038/nmat2098Nat Mater. 7T. Seki, Y. Hasegawa, S. Mitani, S. Taka- hashi, H. Imamura, S. Maekawa, J. Nitta, and K. Takanashi, Nat Mater 7, 125 (2008), URL http://dx.doi.org/10.1038/nmat2098. . J Wunderlich, B Kaestner, J Sinova, T Jungwirth, http:/link.aps.org/doi/10.1103/PhysRevLett.94.047204Phys. Rev. Lett. 9447204J. Wunderlich, B. Kaestner, J. Sinova, and T. Jung- wirth, Phys. Rev. Lett. 94, 047204 (2005), URL http://link.aps.org/doi/10.1103/PhysRevLett.94.047204. . J Inoue, G E W Bauer, L , J.-i. Inoue, G. E. W. Bauer, and L. W. . Molenkamp, http:/link.aps.org/doi/10.1103/PhysRevB.70.041303Phys. Rev. B. 7041303Molenkamp, Phys. Rev. B 70, 041303 (2004), URL http://link.aps.org/doi/10.1103/PhysRevB.70.041303. . M.-H Liu, C.-R Chang, http:/link.aps.org/doi/10.1103/PhysRevB.82.155327Phys. Rev. B. 82155327M.-H. Liu and C.-R. Chang, Phys. Rev. B 82, 155327 (2010), URL http://link.aps.org/doi/10.1103/PhysRevB.82.155327. . A Roth, C Brüne, H Buhmann, L W Molenkamp, J Maciejko, X.-L Qi, S.-C Zhang, Science. 325A. Roth, C. Brüne, H. Buhmann, L. W. Molenkamp, J. Maciejko, X.-L. 
Qi, and S.-C. Zhang, Science 325, 294 (2009), http://www.sciencemag.org/content/325/5938/294.full.pdf, URL http://www.sciencemag.org/content/325/5938/294.abstract. . D A Abanin, S V Morozov, L A Ponomarenko, R V Gorbachev, A S Mayorov, M I Katsnelson, K Watanabe, T Taniguchi, K S Novoselov, L S Levitov, Science. 332D. A. Abanin, S. V. Morozov, L. A. Ponomarenko, R. V. Gorbachev, A. S. Mayorov, M. I. Katsnel- son, K. Watanabe, T. Taniguchi, K. S. Novoselov, L. S. Levitov, et al., Science 332, 328 (2011), http://www.sciencemag.org/content/332/6027/328.full.pdf, URL http://www.sciencemag.org/content/332/6027/328.abstract. . M.-H Liu, S.-H Chen, C.-R Chang, http:/link.aps.org/doi/10.1103/PhysRevB.78.165316Phys. Rev. B. 78165316M.-H. Liu, S.-H. Chen, and C.-R. Chang, Phys. Rev. B 78, 165316 (2008), URL http://link.aps.org/doi/10.1103/PhysRevB.78.165316. . B K Nikolić, S Souma, L P Zârbo, J Sinova, Phys. Rev. Lett. 9546601B. K. Nikolić, S. Souma, L. P. Zârbo, and J. Sinova, Phys. Rev. Lett. 95, 046601 (2005). S Datta, Electronic Transport in Mesoscopic Systems. Cambridge University PressS. Datta, Electronic Transport in Mesoscopic Systems (Cambridge University Press, 1998). . S.-H Chen, I Klik, C.-R Chang, Journal of Applied Physics. 1053S.-H. Chen, I. Klik, and C.-R. Chang, Journal of Applied Physics 105, 07E908 (pages 3) (2009), URL http://link.aip.org/link/?JAP/105/07E908/1. . S.-H Chen, I Klik, C.-R Chang, Journal of Magnetism and Magnetic Materials. 3221452S.-H. Chen, I. Klik, and C.-R. Chang, Journal of Mag- netism and Magnetic Materials 322, 1452 (2010), URL http://www.sciencedirect.com/science/article/pii/S03048853090 . R F S Andrade, H J Schellnhuber, Europhysics Letters). 10EPLR. F. S. Andrade and H. J. Schellnhuber, EPL (Europhysics Letters) 10, 73 (1989), URL http://stacks.iop.org/0295-5075/10/i=1/a=013. . E Domany, S Alexander, D Bensimon, L P Kadanoff, http:/link.aps.org/doi/10.1103/PhysRevB.28.3110Phys. Rev. B. 283110E. Domany, S. Alexander, D. Bensimon, and L. P. Kadanoff, Phys. Rev. B 28, 3110 (1983), URL http://link.aps.org/doi/10.1103/PhysRevB.28.3110. . L Niemeyer, L Pietronero, H J Wiesmann, http:/link.aps.org/doi/10.1103/PhysRevLett.52.1033Phys. Rev. Lett. 521033L. Niemeyer, L. Pietronero, and H. J. Wies- mann, Phys. Rev. Lett. 52, 1033 (1984), URL http://link.aps.org/doi/10.1103/PhysRevLett.52.1033. . S Satpathy, http:/link.aps.org/doi/10.1103/PhysRevLett.57.649Phys. Rev. Lett. 57649S. Satpathy, Phys. Rev. Lett. 57, 649 (1986), URL http://link.aps.org/doi/10.1103/PhysRevLett.57.649. . W Sierpinski, C R Acad, 302Paris IVW. Sierpinski, C. R. Acad., Paris IV, 302 (1915). . Y Gefen, B B Mandelbrot, A Aharony, http:/link.aps.org/doi/10.1103/PhysRevLett.45.855Phys. Rev. Lett. 45855Y. Gefen, B. B. Mandelbrot, and A. Aharony, Phys. Rev. Lett. 45, 855 (1980), URL http://link.aps.org/doi/10.1103/PhysRevLett.45.855. The Fractal Geometry of Nature. B B Mandelbrot, WH FreemanB. B. Mandelbrot, The Fractal Geometry of Nature (WH Freeman, 1982). . L Keldysh, Sov. Phys. JETP. 2068L. Keldysh, Sov. Phys. JETP 20, 67,68 (1965). . S Datta, 0749-6036Superlattices and Microstructures. 28S. Datta, Superlattices and Microstructures 28, 253 (2000), ISSN 0749-6036, URL http://www.sciencedirect.com/science/article/pii/S07496036009 . Y A Bychkov, E I Rashba, Journal of Physics C: Solid State Physics. 17Y. A. Bychkov and E. I. Rashba, Journal of Physics C: Solid State Physics 17, 6039 (1984), URL http://stacks.iop.org/0022-3719/17/i=33/a=015. . 
A I Ekimov, A A Onushchenko, Soviet Journal of Experimental and Theoretical Physics Letters. 34345A. I. Ekimov and A. A. Onushchenko, Soviet Journal of Ex- perimental and Theoretical Physics Letters 34, 345 (1981). . C B Murray, C R Kagan, M G Bawendi, Annual Review of Materials Science. 30545C. B. Murray, C. R. Kagan, and M. G. Bawendi, Annual Review of Materials Science 30, 545 (2000). . S.-H Chen, C.-R Chang, http:/link.aps.org/doi/10.1103/PhysRevB.77.045324Phys. Rev. B. 7745324S.-H. Chen and C.-R. Chang, Phys. Rev. B 77, 045324 (2008), URL http://link.aps.org/doi/10.1103/PhysRevB.77.045324. . S.-H Chen, M.-H Liu, K.-W Chen, C.-R Chang, Journal of Applied Physics. 1033S.-H. Chen, M.-H. Liu, K.-W. Chen, and C.-R. Chang, Journal of Applied Physics 103, 07B721 (pages 3) (2008), URL http://link.aip.org/link/?JAP/103/07B721/1. . B Kubala, J König, http:/link.aps.org/doi/10.1103/PhysRevB.65.245301Phys. Rev. B. 65245301B. Kubala and J. König, Phys. Rev. B 65, 245301 (2002), URL http://link.aps.org/doi/10.1103/PhysRevB.65.245301. . K.-W Chen, C.-R Chang, http:/link.aps.org/doi/10.1103/PhysRevB.78.235319Phys. Rev. B. 78235319K.-W. Chen and C.-R. Chang, Phys. Rev. B 78, 235319 (2008), URL http://link.aps.org/doi/10.1103/PhysRevB.78.235319.
[]
[ "REACHING THE OLDEST STARS BEYOND THE LOCAL GROUP: ANCIENT STAR FORMATION IN UGC 4483", "REACHING THE OLDEST STARS BEYOND THE LOCAL GROUP: ANCIENT STAR FORMATION IN UGC 4483" ]
[ "Elena Sacchi [email protected] \nSpace Telescope Science Institute\n3700 San Martin Drive21218BaltimoreMDUSA\n\nLeibniz-Institut für Astrophysik Potsdam\nAn der Sternwarte 1614482PotsdamGermany\n\nINAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna\nVia Gobetti 93/3, I-40129BolognaItaly\n", "Alessandra Aloisi \nSpace Telescope Science Institute\n3700 San Martin Drive21218BaltimoreMDUSA\n", "Matteo Correnti \nSpace Telescope Science Institute\n3700 San Martin Drive21218BaltimoreMDUSA\n", "Francesca Annibali \nINAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna\nVia Gobetti 93/3, I-40129BolognaItaly\n", "Monica Tosi \nINAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna\nVia Gobetti 93/3, I-40129BolognaItaly\n", "Alessia Garofalo \nINAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna\nVia Gobetti 93/3, I-40129BolognaItaly\n", "Gisella Clementini \nINAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna\nVia Gobetti 93/3, I-40129BolognaItaly\n", "Michele Cignoni \nINAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna\nVia Gobetti 93/3, I-40129BolognaItaly\n\nDipartimento di Fisica\nUniversità di Pisa\nLargo Bruno Pontecorvo, 356127PisaItaly\n\nINFN, Sezione di Pisa\nLargo Pontecorvo 356127PisaItaly\n", "Bethan James \nSpace Telescope Science Institute\n3700 San Martin Drive21218BaltimoreMDUSA\n", "Marcella Marconi \nINAF-Osservatorio Astronomico di Capodimonte\nSalita Moiariello 1680131NaplesItaly\n", "Tatiana Muraveva \nINAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna\nVia Gobetti 93/3, I-40129BolognaItaly\n", "Roeland Van Der Marel \nSpace Telescope Science Institute\n3700 San Martin Drive21218BaltimoreMDUSA\n\nCenter for Astrophysical Sciences\nDepartment of Physics & Astronomy\nJohns Hopkins University\n21218BaltimoreMDUSA\n" ]
[ "Space Telescope Science Institute\n3700 San Martin Drive21218BaltimoreMDUSA", "Leibniz-Institut für Astrophysik Potsdam\nAn der Sternwarte 1614482PotsdamGermany", "INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna\nVia Gobetti 93/3, I-40129BolognaItaly", "Space Telescope Science Institute\n3700 San Martin Drive21218BaltimoreMDUSA", "Space Telescope Science Institute\n3700 San Martin Drive21218BaltimoreMDUSA", "INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna\nVia Gobetti 93/3, I-40129BolognaItaly", "INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna\nVia Gobetti 93/3, I-40129BolognaItaly", "INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna\nVia Gobetti 93/3, I-40129BolognaItaly", "INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna\nVia Gobetti 93/3, I-40129BolognaItaly", "INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna\nVia Gobetti 93/3, I-40129BolognaItaly", "Dipartimento di Fisica\nUniversità di Pisa\nLargo Bruno Pontecorvo, 356127PisaItaly", "INFN, Sezione di Pisa\nLargo Pontecorvo 356127PisaItaly", "Space Telescope Science Institute\n3700 San Martin Drive21218BaltimoreMDUSA", "INAF-Osservatorio Astronomico di Capodimonte\nSalita Moiariello 1680131NaplesItaly", "INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna\nVia Gobetti 93/3, I-40129BolognaItaly", "Space Telescope Science Institute\n3700 San Martin Drive21218BaltimoreMDUSA", "Center for Astrophysical Sciences\nDepartment of Physics & Astronomy\nJohns Hopkins University\n21218BaltimoreMDUSA" ]
[]
We present new WFC3/UVIS observations of UGC 4483, the closest example of a metal-poor blue compact dwarf galaxy, with a metallicity of Z 1/15 Z and located at a distance of D 3.4 Mpc.The extremely high quality of our new data allows us to clearly resolve the multiple stellar evolutionary phases populating the color-magnitude diagram (CMD), to reach more than 4 mag deeper than the tip of the red giant branch, and to detect for the first time core He-burning stars with masses 2 M , populating the red clump and possibly the horizontal branch (HB) of the galaxy. By applying the synthetic CMD method to our observations, we determine an average star formation rate over the whole Hubble time of at least (7.01 ± 0.44) × 10 −4 M /yr, corresponding to a total astrated stellar mass of (9.60 ± 0.61) × 10 6 M , 87% of which went into stars at epochs earlier than 1 Gyr ago. With our star formation history recovery method we find the best fit with a distance modulus of DM = 27.45 ± 0.10, slightly lower than previous estimates. Finally, we find strong evidence of an old ( 10 Gyr) stellar population in UGC 4483 thanks to the detection of an HB phase and the identification of six candidate RR Lyrae variable stars.
10.3847/1538-4357/abea16
[ "https://arxiv.org/pdf/2102.13119v1.pdf" ]
246,471,426
2102.13119
33842fb2f8377fa98c6b25eca253c64bd7aa52f5
REACHING THE OLDEST STARS BEYOND THE LOCAL GROUP: ANCIENT STAR FORMATION IN UGC 4483 March 1, 2021 Elena Sacchi [email protected] Space Telescope Science Institute 3700 San Martin Drive21218BaltimoreMDUSA Leibniz-Institut für Astrophysik Potsdam An der Sternwarte 1614482PotsdamGermany INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna Via Gobetti 93/3, I-40129BolognaItaly Alessandra Aloisi Space Telescope Science Institute 3700 San Martin Drive21218BaltimoreMDUSA Matteo Correnti Space Telescope Science Institute 3700 San Martin Drive21218BaltimoreMDUSA Francesca Annibali INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna Via Gobetti 93/3, I-40129BolognaItaly Monica Tosi INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna Via Gobetti 93/3, I-40129BolognaItaly Alessia Garofalo INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna Via Gobetti 93/3, I-40129BolognaItaly Gisella Clementini INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna Via Gobetti 93/3, I-40129BolognaItaly Michele Cignoni INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna Via Gobetti 93/3, I-40129BolognaItaly Dipartimento di Fisica Università di Pisa Largo Bruno Pontecorvo, 356127PisaItaly INFN, Sezione di Pisa Largo Pontecorvo 356127PisaItaly Bethan James Space Telescope Science Institute 3700 San Martin Drive21218BaltimoreMDUSA Marcella Marconi INAF-Osservatorio Astronomico di Capodimonte Salita Moiariello 1680131NaplesItaly Tatiana Muraveva INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna Via Gobetti 93/3, I-40129BolognaItaly Roeland Van Der Marel Space Telescope Science Institute 3700 San Martin Drive21218BaltimoreMDUSA Center for Astrophysical Sciences Department of Physics & Astronomy Johns Hopkins University 21218BaltimoreMDUSA REACHING THE OLDEST STARS BEYOND THE LOCAL GROUP: ANCIENT STAR FORMATION IN UGC 4483 March 1, 2021Draft version Preprint typeset using L A T E X style emulateapj v. 12/16/11 Draft version March 1, 2021galaxies: dwarf -galaxies: irregular -galaxies: evolution -galaxies: individual (UGC 4483) -galaxies: star formation -galaxies: stellar content -galaxies: starburst We present new WFC3/UVIS observations of UGC 4483, the closest example of a metal-poor blue compact dwarf galaxy, with a metallicity of Z 1/15 Z and located at a distance of D 3.4 Mpc.The extremely high quality of our new data allows us to clearly resolve the multiple stellar evolutionary phases populating the color-magnitude diagram (CMD), to reach more than 4 mag deeper than the tip of the red giant branch, and to detect for the first time core He-burning stars with masses 2 M , populating the red clump and possibly the horizontal branch (HB) of the galaxy. By applying the synthetic CMD method to our observations, we determine an average star formation rate over the whole Hubble time of at least (7.01 ± 0.44) × 10 −4 M /yr, corresponding to a total astrated stellar mass of (9.60 ± 0.61) × 10 6 M , 87% of which went into stars at epochs earlier than 1 Gyr ago. With our star formation history recovery method we find the best fit with a distance modulus of DM = 27.45 ± 0.10, slightly lower than previous estimates. Finally, we find strong evidence of an old ( 10 Gyr) stellar population in UGC 4483 thanks to the detection of an HB phase and the identification of six candidate RR Lyrae variable stars. 1. INTRODUCTION Star formation (SF) studies beyond the Local Group have been pushed to their current limits in the past few years. 
Thanks to the spatial resolution and sensitivity of the Hubble Space Telescope (HST ), it is possible to resolve single stars in galaxies up to ∼ 20 Mpc, with the limitation of characterizing only the brightest stellar evolutionary features, i.e., the upper main sequence (MS), the He-burning phase of massive and intermediate-mass stars (blue loops, BLs), the asymptotic giant branch (AGB), and the red giant branch (RGB). Even though the RGB can host stars with any age between 1 − 2 and 13 Gyr, it is unfortunately affected by an age-metallicity degeneracy; this implies that only the most recent star formation history (SFH), back to 1 − 2 Gyr ago, can be derived with good time resolution (∼ 10%), while a precise characterization of the SF behaviour is not possible at the oldest epochs. So, why do we even go through the trouble of studying such distant systems, with all the uncertainties associated with them? Beyond removing the environmental effects that the big spirals inside the Local Group have on smaller systems, and studying dwarf galaxies in isolation, a key reason is that a particular sub-group of the dwarf Based on observations obtained with the NASA/ESA Hubble Space Telescope at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy under NASA Contract NAS 5-26555. class is not present inside the Local Group: blue compact dwarf (BCD) galaxies. These are extremely interesting systems, most often characterized by bluer colors, more intense star formation activities, and higher central surface brightness compared to regular star-forming dwarfs. Because of their recent or ongoing bursts of SF and typical low metallicities, for a long time they were believed to be young galaxies, but all the systems studied in details through their color-magnitude diagrams (CMDs) and SFHs revealed a population of RGB stars, thus at least 1 − 2 Gyr old, and possibly as old as 13.7 Gyr (see, e.g., Schulte-Ladbeck et al. 2001;Annibali et al. 2003;Aloisi et al. 2007;McQuinn et al. 2015). All the studies conducted so far on BCDs are limited by the depth of the photometry, reaching 1 − 2 magnitudes below the RGB tip (TRGB), which allows to robustly reconstruct the SFH up to a few Gyr ago only. To make progress, it is necessary to reach fainter features with discriminatory power at older ages. The brightest signatures of several Gyr old stars are the red clump (RC) and horizontal branch (HB), both corresponding to core He-burning evolution phases of stars with masses 2 M . In particular, despite the uncertainties on which other parameters actually affect the color extension of the HB (e.g., alpha-element content, internal dynamics, etc.), the potential detection of a blue HB (i.e., stars with M 1 M at M I 0 mag) would unambiguously indicate the presence of a population that is both old ( 10 Gyr) and metal-poor ([Fe/H] −1.5 dex). Old metal-poor HB stars can cross the instability strip and produce RR Lyrae (RRLs) type pulsating variables, so the detection of such stars allows to unambiguously trace the signature of a ∼ 10 Gyr old population throughout a galaxy (see, e.g., Clementini et al. 2003) even when the magnitude level of the HB is close to the detection limit. 
Here we present a detailed analysis of UGC 4483, the closest example of metal-poor BCD galaxy, located between the bright spirals M81 and NGC 2403 at a distance of D = 3.4 ± 0.2 Mpc, corresponding to a distance modulus of DM = 27.63 ± 0.12 (Izotov & Thuan 2002), and with an oxygen abundance of 12+log(O/H) = 7.56±0.03 (van Zee & Haynes 2006), corresponding to Z 1/15 Z (using 8.73 for the solar oxygen abundance, Caffau et al. 2015). Its cometary shape (see Figure 1) resembles that of SBS 1415+437 (Aloisi et al. 2005), which also has a very similar metal content. From Hi 21 cm observations, Thuan & Seitzer (1979) derived a gas mass of M Hi = 4.1 × 10 7 M , and a gas fraction which corresponds to 39% of all visible mass. The Hi mass from Lelli et al. (2012b) is M Hi = 2.5 × 10 7 M , and they also find a steeply-rising rotation curve that flattens in the outer parts, making UGC 4483 the lowest-mass galaxy with a differentially rotating Hi disk. This steep rise of the rotation curve indicates a strong central concentration of mass, a property which seems to be typical of BCDs. UGC 4483 has already been resolved into stars with the HST /WFPC2 (Dolphin et al. 2001;Izotov & Thuan 2002;Odekon 2006;McQuinn et al. 2010). The I vs. V − I CMDs reveal a young stellar population of blue MS stars as well as blue and red supergiants associated with the bright Hii region at the northern tip of the galaxy (see Figure 1 and Region 0a of Figure 2). An older evolved population was found throughout the whole low surface brightness body of the galaxy, as indicated by very bright AGB stars and the tip of a very blue RGB. However, those data were not deep enough to discriminate between a relatively metal-poor population with an age of ∼ 10 Gyr for the RGB/AGB stars, or a somewhat higher metallicity and an age of ∼ 2 Gyr. We present here new WFC3/UVIS observations reaching more than 4 mag deeper than the TRGB, to detect and characterize the RC and/or HB populations of the galaxy. These new data strongly constrain both the age and metallicity properties of the SFH of UGC 4483 back to many, possibly 10, Gyr ago. The RC absolute magnitude (near M I −0.5 mag) depends sensitively on the age of the stellar population, and it is ∼ 1 mag brighter for a 1 Gyr old population than for a 10 Gyr old population (see, e.g., figure 23 of Rejkuba et al. 2005). The dependence of the RC magnitude on metallicity is not strong, and either way, is different than the dependence of the RGB color on age and metallicity. Therefore, a joint determination of the RGB color and RC magnitude constrains the age and metallicity of a stellar population independently (see, e.g., NGC 1569 at a similar distance; Grocholski et al. 2012). The quality of our new data allows us to finally analyze a very deep CMD of UGC 4483, and to reach lower-mass stars, thus, older stellar features (i.e., the RC and HB) beyond the edge of the Local Group. 2. OBSERVATIONS AND DATA REDUCTION Observations of UGC 4483 were performed on January 2019 (Visits 1 and 2) and February 2019 (Visit 3) using the UVIS channel of the WFC3 as part of the HST program GO-15194 (PI: A. Aloisi). Despite the smaller field of view, we preferred WFC3/UVIS over ACS/WFC because of the slightly higher resolution that provides a better photometry in the most crowded central regions of the galaxy. 
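As a quick numerical check of the quantities quoted at the start of this section, the short sketch below (plain Python, using only values given in the text) converts the oxygen abundance into a metallicity relative to solar and the distance modulus into a physical distance.

```python
# Minimal arithmetic check of the abundance and distance quoted above.
logOH_gal = 7.56    # 12 + log(O/H) for UGC 4483 (van Zee & Haynes 2006)
logOH_sun = 8.73    # solar oxygen abundance (Caffau et al. 2015)

# Metallicity relative to solar, assuming Z simply scales with O/H
Z_rel = 10 ** (logOH_gal - logOH_sun)
print(f"Z/Z_sun ~ {Z_rel:.3f} ~ 1/{1 / Z_rel:.0f}")   # ~0.068, i.e. ~1/15

# Distance modulus DM = 5 log10(d / 10 pc)  ->  distance in Mpc
DM = 27.63
d_Mpc = 10 ** (DM / 5.0 + 1.0) / 1e6
print(f"d ~ {d_Mpc:.2f} Mpc")                         # ~3.4 Mpc
```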
The target was centered on one of the two CCD chips of the WFC3/UVIS camera, in order to minimize the impact of chip-dependent zeropoints and to avoid the loss of the galaxy central region due to the CCD chip gap. The observations were obtained in the two broad-band filters F606W and F814W. We also required narrow-band imaging in F656N to study the distribution of the ionized gas. We selected the broader F606W instead of F555W as the blue filter, because it provides the best compromise between a reasonable exposure time (a factor of ∼ 2 shorter for F606W than for F555W) and the achievement of our science goals. During Visits 1 and 2, 14 long exposures (2580 sec each) were taken in the F606W filter (for a total exposure time of 36120 sec), and 8 in the F814W filter (for a total exposure time of 20640 sec). Twelve additional exposures with the same integration time (2580 sec each, for a total exposure time of 30960 sec) were collected during Visit 3 in the F814W filter. Those exposure times were chosen in order to achieve a signal-to-noise ratio of ∼ 10, at the HB magnitude level, which corresponds to I ∼ 27.7 mag at the distance of UGC 4483. The exposures were executed with a spatial offset, using a dithering pattern of an integer+fraction of pixels, in order to move across the gap between the two chips, to simplify the identification and removal of bad/hot pixels and cosmic rays, and to improve the point spread function (PSF) sampling. It is worth mentioning that the observations were performed in the Continuous Viewing Zone (CVZ) of HST, which allowed us to observe for the entire 96 minute orbit. This resulted in a very deep photometry even in just 18 orbits. To reduce the images, we followed the same procedure outlined in Annibali et al. (2019). First, the calibrated .flc science images were retrieved from the HST archive. The .flc images are the products of the calwf3 data reduction pipeline and constitute the bias-corrected, dark-subtracted, flat-fielded, and charge transfer inefficiency corrected images. Then, we combined together the individual .flc images into a single drizzled, stacked, and distortion-corrected image (.drc image) using the Drizzlepac software (Gonzaga et al. 2012). To do so, all the images in the same filter were first aligned using the software TweakReg and then, using the software AstroDrizzle, bad pixels and cosmic rays were flagged and rejected from the input images. Finally, those input undistorted and aligned images were combined together into a final stacked image. In the left panel of Figure 1 we show a 3-color composite image of UGC 4483 from our HST /WFC3 observations: blue corresponds to F606W (broad V ), red to F814W (I), while the green channel was obtained using the mean of the two. The right panel shows instead the F656N image of the galaxy (please note that given its very low S/N, the Hα image has not been continuum subtracted.) Hi contours from the VLA-ANGST survey (Ott et al. 2012) corresponding to column densities of N H = 3.0, 2.5, 2.0, 1.5, 1.0, and 0.5 × 10 21 cm −2 have been superimposed to both images. Nebular gas emission can contaminate broad-band photometry, but as shown by the right panel of Figure 1, outside of the most active 0a and 0b regions (as defined in Figure 2), the contamination is negligible. PSF stellar photometry was performed using the latest version of Dolphot (Dolphin 2000, and numerous subsequent updates). 
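The alignment and stacking described above can be scripted with the DrizzlePac tasks named in the text (TweakReg and AstroDrizzle). The fragment below is only a schematic sketch of that workflow, not the actual reduction script: the file-name pattern and the few keyword choices shown are assumptions, and the full set of parameters follows the DrizzlePac handbook (Gonzaga et al. 2012). The photometry itself is then run on the aligned .flc frames, as described next.

```python
# Schematic per-filter alignment + stacking step (illustrative parameters only).
import glob
from drizzlepac import tweakreg, astrodrizzle

flc_frames = sorted(glob.glob("*f606w*_flc.fits"))   # hypothetical file naming

# 1) Register the individual exposures and write the solution to the headers.
tweakreg.TweakReg(flc_frames, updatehdr=True)

# 2) Flag bad pixels / cosmic rays and combine the aligned frames into a
#    single distortion-corrected, stacked image (the .drc reference frame).
astrodrizzle.AstroDrizzle(flc_frames, output="ugc4483_f606w", build=True)
```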
After the usual pre-processing required by the software and performed on each single science image (i.e., creation of bad pixel mask and generation of sky frame), iterative PSF photometry was performed simultaneously on the .flc images using the F606W .drc image as the reference frame for alignment. The different Dolphot parameters that govern alignment and photometry were set as in Annibali et al. (2019), adopting a hybrid combination between the values recommended by the Dolphot manual and those adopted in Williams et al. (2014). Together with the positions and photometry of the individual stars, the final photometric catalog contains several diagnostic parameters which are useful to exclude remaining artifacts and spurious detections. Hence, we selected from the total catalog all the objects with the Dolphot "object type" flag = 1, and then we applied a series of consecutive selection cuts using the remaining diagnostics (i.e., error, sharpness, roundness, and crowding). The final clean catalog contains ∼ 14 000 stars, and the corresponding CMD is shown in Figure 3. We also analyzed the coordinated parallel images obtained with the ACS/WFC, which in principle could give us rich information about the stellar population of the halo of the galaxy. However, the resulting CMD does not contain any sources we could link to UGC 4483, but only background contamination; unfortunately, these fields are probably already too far away from the galaxy to include its halo. This suggests that the halo of UGC 4483 does not reach as far as the ACS field, i.e. is smaller than ∼ 6 kpc. To properly analyze the spatial variations of the SF within the galaxy, we divided our final catalog in six sub-regions, as shown in Figure 2, following the isophotal contours of the F606W image. The innermost regions, 0a and 0b, correspond to the most active areas of the galaxy, where we see Hα emission from the Hii regions. follow the Hα, which can be due to the fact that the recently formed massive stars ionized the gas (thus we see Hα emission) leaving less neutral gas in their surroundings (and also external effects like a recent interaction might have influenced the Hi gas distribution). Figure 3 shows the F814W versus F606W−F814W CMD of UGC 4483. We notice the extremely high quality of these data, with the main stellar evolutionary phases clearly recognizable and well separated in the diagram. The CMD shows a continuity of stellar populations, suggesting a SF that spans from ancient to recent epochs; we can identify two well separated sequences for MS stars at m F606W − m F814W 0 and stars at the blue edge of the BLs (core He-burning intermediate-and high-mass stars) at m F606W − m F814W ∼ 0.2, the diagonal feature produced by stars at the red edge of the BLs at m F606W − m F814W 0.5 (well separated from the RGB), and a few carbon stars and thermally pulsing asymptotic giant branch (TP-AGB) stars, outlining the horizontal feature around m F606W − m F814W 1 and m F814W 23; older stars are enclosed in the RGB with its clearly defined tip at m F814W ∼ 23.7, in the RC, and possibly in the HB, our oldest age signature. To guide the eye and explore the age and metallicity of these populations, we over-plot the MIST isochrones (Choi et al. 2016;Dotter 2016) for four different metallicities ([Fe/H] = −2.9, −2.0, −1.5, −1.0) on the CMD ( Figure 4). As a reference, from the oxygen abundance of the Hii regions we obtain a current metallicity [Fe/H] = −1.2. 
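A possible implementation of the catalog cleaning described above is sketched here with numpy; the column names, file name, and threshold values are placeholders for illustration only, since the exact cut values adopted for the final catalog are not reported in this section.

```python
# Illustrative quality cuts on a Dolphot-like catalog (placeholder columns).
import numpy as np

cat = np.genfromtxt("ugc4483_dolphot.cat", names=True)   # hypothetical file

good = (
    (cat["objtype"] == 1)                                      # "good star" flag
    & (cat["err_f606w"] < 0.3) & (cat["err_f814w"] < 0.3)      # photometric error
    & (np.abs(cat["sharp_f606w"] + cat["sharp_f814w"]) < 0.1)  # sharpness
    & (cat["round_f606w"] < 3.0) & (cat["round_f814w"] < 3.0)  # roundness
    & (cat["crowd_f606w"] + cat["crowd_f814w"] < 1.0)          # crowding
)
clean = cat[good]
print(f"{clean.size} stars pass the cuts")   # ~14,000 in the real catalog
```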
The isochrones were shifted according to galaxy's distance modulus of DM = 27.45 (which is what we obtain from out best-fitting technique, see Section 5), and foreground extinction of E(B − V) = 0.034 (Schlafly & Finkbeiner 2011). The different metallicities are labeled at the top of each plot, while ages are labeled in the legend. We can see how the most metal-poor isochrone set is too blue to fit the youngest stellar populations, in particular to reproduce the color of the BLs, which instead starts to be compatible with a metallicity of [Fe/H] = −1.5. Also RGB stars are significantly redder than the two most metal-poor isochrone sets shown here, even though they touch its blue edge. We cannot instead exclude such low metallicities for older, fainter stars. To understand how the different stellar populations are distributed within the galaxy, we select age intervals in the CMD using the same MIST isochrones as a guide, and plot the spatial map of the selected stars. The results are shown in Figure 5. We choose to select upper MS stars (in blue), with ages 30 Myr, blue BL stars (in green), with intermediate ages between ∼ 50 and ∼ 500 Myr, RGB stars (in red), older than ∼ 2 Gyr, RC stars (in magenta), older than ∼ 2 Gyr, and HB stars (in orange), older than ∼ 10 Gyr. The top left panel shows these selections in the CMD, while in the other panels we plot the spatial maps of the age-selected stars, as labeled, with contours showing the different regions we selected within the galaxy (see Figure 2). The general trend is that younger stars have more concentrated and clumpy distributions compared to older stars, a behaviour expected as stars move out of their natal structures as they age. In particular, very young MS stars are mainly found in the two inner regions (0a and 0b) which host the very bright SF regions visible in Figures 1 and 2, where the Hii regions are located. We also notice the central holes in these same regions in the RGB, RC, and HB maps, a sign of the strong incompleteness of these fainter stellar populations there. In reality, these faint old stars are likely uniformly distributed over the galaxy. 4. ARTIFICIAL STAR TESTS To properly characterize the photometric errors and incompleteness of the data, we perform artificial star tests (ASTs) on our images using the dedicated DOLPHOT routine. We follow a general standard procedure where we add a fake star (for which we know exactly the input position and magnitudes) to the real images, re-run the photometry, and check whether the source is detected and its output magnitudes. We then repeat the process many times (2 millions in this case) varying the position and magnitudes of the input fake stars, to fully map the whole image and explore the whole range of magnitudes and colors covering the observed CMD. We inject each individual star simultaneously in both F606W and F814W images, in order to reproduce a realistic situation and to account for the error and completeness correlation in the two bands. We consider a star "recovered" in the output catalog if the measured magnitude is within 0.75 mag from its input value in both filters, and satisfy all the selection cuts applied to the real photometry. Adding one fake star at a time guarantees not to artificially alter the crowding of the images. 
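The recovery criterion just described reduces to simple bookkeeping on the artificial-star input/output lists; a minimal sketch is given below, with placeholder array and file names (the real tests use two million stars injected one at a time, and additionally require the same quality cuts as the observed catalog).

```python
# Completeness and photometric-error estimate from artificial star tests.
import numpy as np

ast = np.load("ugc4483_asts.npz")                      # hypothetical AST output
m606_in, m814_in = ast["m606_in"], ast["m814_in"]      # injected magnitudes
m606_out, m814_out = ast["m606_out"], ast["m814_out"]  # measured (NaN if lost)

recovered = (
    np.isfinite(m606_out) & np.isfinite(m814_out)
    & (np.abs(m606_out - m606_in) < 0.75)    # within 0.75 mag in F606W
    & (np.abs(m814_out - m814_in) < 0.75)    # and in F814W
)

# Completeness vs. input F814W magnitude (per-region selections work the same).
bins = np.arange(20.0, 30.5, 0.5)
n_in, _ = np.histogram(m814_in, bins)
n_rec, _ = np.histogram(m814_in[recovered], bins)
completeness = np.where(n_in > 0, n_rec / np.maximum(n_in, 1), np.nan)

# Output-minus-input magnitudes of the recovered stars map the photometric
# error distribution (and the negative skew produced by blending).
dm814 = m814_out[recovered] - m814_in[recovered]
```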
Given the very different crowding conditions within the galaxy, we build the input distribution of stars following the surface brightness of the F606W image, to obtain a more accurate estimate of the incompleteness in the most crowded regions that would be under-sampled with a uniform input distribution. From the output distribution of the recovered stars we derive an estimate of the photometric error (from the m output − m input versus m input distribution, Figure 6) and completeness (from the ratio between the number of output and input stars, Figure 7) as a function of both space and magnitude. Generally speaking, a star is considered recovered if its output flux agrees within 0.75 mag with its input value. We also consider "lost" the stars that do not pass the same quality tests as the real data. The distribution of the photometric error displayed in Figure 6 also allows to take into account the effect of blending of multiple sources on the photometry; in fact, a systematic negative skewness in the output−input flux (stars found brighter than their input) is a signature of overlapping of artificial stars with other ones. We take this skewness into account to consider the blending effect when we create the synthetic CMDs. In Figure 7 we plot the completeness as a function of magnitude for the different regions in which we divided the galaxy (see Figure 2). The very different crowding conditions from inside out are reflected in the different completeness behaviours: indeed, the most internal regions are the most crowded, thus incomplete ones, while the completeness increases as we move outwards. The completeness is 50% at m F606W 25.3 and m F814W 24.75 in the central regions of the galaxy (0a and 0b), and at m F606W 28.7 and m F814W 28 in the outer field (Region 4). STAR FORMATION HISTORY With the photometric catalog and the artificial star catalog in our hands, we can perform detailed studies of the stellar populations and recover the SFH of UGC 4483 using the synthetic CMD method (see, e.g., Tosi et al. 1991;Gallart et al. 2005;Tolstoy et al. 2009;McQuinn et al. 2010;Weisz et al. 2011;Cignoni et al. 2015;Sacchi et al. 2018): the observed CMD is compared to synthetic ones built from a set of stellar evolution models (evolutionary tracks or isochrones) adequately treated to match the distance, extinction, and photometric properties of the galaxy. Synthetic CMDs, each representing a simple stellar population of fixed age and metallicity, are created and used as "basis" functions (BFs), and a linear combination of these BFs creates a composite population which can represent, with the appropriate weights, any SFH. The weight associated with each BF is proportional to the number of stars formed at that age and metallicity, and the best-fit SFH is described by the set of weights producing a composite model CMD most similar to the observed one. The best-fitting weights are determined by using a minimization algorithm to compare data and models. We build our models from both the PARSEC-COLIBRI (Bressan et al. 2012;Marigo et al. 2017) and MIST (Choi et al. 2016;Dotter 2016) isochrone libraries 17 using the following parameters: Kroupa initial mass function (Kroupa 2001) Figures 6 and 7). We also explore different distance/reddening combinations to obtain the best match with the data. We do not impose a metallicity distribution function, but we require that the metallicity at each time bin cannot be more than 25% lower than the metallicity of the adjacent older bin. 
This is to include a soft constraint without forcing an unknown distribution a priori. This is a looser boundary condition with respect to a metallicity monotonically increasing with time, often imposed in this kind of studies; however, in a galaxy as metal-poor as UGC 4483, nobody knows exactly how the metallicity varies with time, for instance because of the accretion of large amounts of metal poor gas, or because of significant losses of heavy elements, through galactic winds triggered by powerful supernova explosions. We use the hybrid genetic code SFERA (Cignoni et al. 2015) to build the synthetic CMDs and perform the comparison between models and data. SFERA is built to performs a complete implementation of the synthetic CMD method, from the MonteCarlo extraction of synthetic stars from the adopted stellar evolution models, to the inclusion of observational effects in the models through ASTs, and finally, performing the minimization of the residuals between observed and synthetic CMDs, with an accurate estimate of the uncertainties. For the minimization, we bin both the synthetic and observed CMDs to compare star counts in each grid cell. Given that we need to take into account the possible low number counts in some CMD cells, we choose to follow a Poissonian statistics, looking for the combination of synthetic CMDs that minimizes a likelihood distance between model and data: χ P = N bin i=1 obs i ln obs i mod i − obs i + mod i(1) 17 Notice that the PARSEC-COLIBRI models adopt Z = 0.0152, while the MIST models adopt Z = 0.0142. Figure 2) and used to recover the SFH (0a and 0b being the innermost ones, 4 the outermost one, see Figure 2). Over-plotted in colors are the MIST isochrones (Choi et al. 2016;Dotter 2016) with [Fe/H] = −1.5 and ages as labeled, shifted to match the distance and foreground extinction of the galaxy. where mod i and obs i are the model and the data histograms in the i−bin. This likelihood is minimized with the hybrid-genetic algorithm, i.e. a combination of a classical genetic algorithm (Pikaia 18 ) and a local search (Simulated Annealing). Statistical uncertainties are computed with a bootstrap technique on the data, i.e. applying small shifts in color and magnitude to the observed CMD and re-deriving the SFH for each new version. We average the different solutions and take the rms deviation as statistical error on the mean value. Systematic uncertainties are accounted for by re-deriving the SFH with different grids to bin age and CMDs, and using different sets of isochrones. These are added in quadrature to the statistical error. Before discussing the details of the SFH in the various regions, we can have a look at their CMDs which already reveal profound differences within the body of the galaxy. Figure 8 shows the 6 CMDs of these regions (0a and 0b 18 Routine developed at the High Altitude Observatory and publicly available: http://www.hao.ucar.edu/modeling/pikaia/pikaia. php being the innermost ones, 4 the outermost one, see Figure 2), with over-plotted in colors the MIST isochrones with [Fe/H] = −1.5 and ages in the range 10 Myr − 10 Gyr as labeled, shifted to match the distance and foreground extinction of the galaxy. Regions 0a and 0b include the most actively star forming areas of the galaxy, where we see Hα emission from the Hii regions. Their CMDs show very young ( 10 Myr) MS stars, with some BL stars, and very few RGB stars. 
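To make the fitting step concrete, the sketch below builds a composite model CMD as a weighted sum of basis-function Hess diagrams and scores it against the data with the Poissonian likelihood of Equation (1). This is only a schematic re-implementation of the quantities defined above, not SFERA itself, and all the arrays are random placeholders.

```python
import numpy as np

def poisson_distance(obs, mod, eps=1e-12):
    """Equation (1): sum_i [ obs_i ln(obs_i / mod_i) - obs_i + mod_i ].
    Empty observed cells (obs_i = 0) contribute only mod_i."""
    obs = np.asarray(obs, dtype=float).ravel()
    mod = np.clip(np.asarray(mod, dtype=float).ravel(), eps, None)
    return float(np.where(obs > 0, obs * np.log(obs / mod) - obs + mod, mod).sum())

# Placeholder inputs: an observed Hess diagram and a stack of basis functions,
# i.e. one binned synthetic CMD per (age, metallicity) bin on the same grid.
rng = np.random.default_rng(0)
obs_hess = rng.poisson(3.0, size=(60, 40)).astype(float)
basis = rng.random((25, 60, 40))        # 25 simple stellar populations

# Composite model: linear combination of the basis functions; each weight is
# proportional to the mass (number of stars) formed in that age/[Fe/H] bin.
weights = np.full(25, 2.0)              # a trial SFH
model = np.tensordot(weights, basis, axes=1)

print(f"chi_P = {poisson_distance(obs_hess, model):.1f}")
```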
The observed MS in these two regions is a bit redder with respect to the plotted isochrones, and presumably needs a higher metallicity and/or larger reddening, which is most likely in these highly star-forming regions. As already pointed out in Section 3, the presence of very bright young stars makes the incompleteness severe here, hindering a proper characterization of all the stellar populations present in these regions. This, together with the low number of stars observed, prevents us from running our SFH procedure in a statistically significant way. Going outwards from Region 1 to Region 4, we see a Figure 9. Left panels. Hess diagrams of the CMDs for the different regions (1 to 4 from top row to bottom row) of UGC 4483: the observational diagram is on the left and the one reconstructed on the basis of the MIST models in the middle, while on the right we show the residuals between the two in terms of the likelihood used to compare data and models with SFERA; the black line shows the 50% completeness limit, used as a boundary for the SFH recovery. Right panels. Recovered SFH from the two adopted sets of models (COLIBRI in blue, MIST in orange) with 1σ error bars which include both random and systematic uncertainties. clear evolution of the CMD which becomes progressively older: the MS is less and less populated, while the RGB and RC features start to stand out. In Region 4 we reach a 50% completeness at m F814W ∼ 28. These differences will be reflected in the radial SFH of the galaxy. Region 1 Region 1 includes the area just outside the H ii regions of UGC 4483. Its CMD contains a variety of stellar populations, suggesting a continuous SF from ancient to recent epochs. Figure 9, top row, shows the Hess diagram (i.e. the density of points) of the CMD, with the observational diagram being on the left, the one reconstructed on the basis of the MIST models in the middle, and the residuals between the two on the right; the black line shows an estimate of the 50% completeness limit, used as a boundary for the SFH recovery. The rightmost panel contains the best-fit SFH from the two adopted sets of models (blue for COLIBRI, orange for MIST). As expected from the CMD analysis, we see a prevalence of young SF, even though the low number of stars and severe incompleteness result in huge errors on our rates. The models perfectly reproduce the good separation between the MS and blue BL sequences, and between the red BL and RGB sequences, a great value of these excellent data, and a significant improvement with respect to previous observations. The two sets of models reasonably agree, in particular in the best constrained intermediate-age bins, between ∼ 60 Myr and ∼ 1 Gyr. In both cases, the best-fit SFHs were obtained with an additional internal reddening of E(B − V) = 0.075, i.e. A F606W = 0.22 and A F814W = 0.14. Region 2 The results for Region 2 are shown in the second row of Figure 9. From the Hess diagram we can see a clear shortening of the bright MS and a higher density of older BLs, while the RGB starts to become more populated and more carbon stars and TP-AGB stars appear. This is well reflected in the SFH, whose peak is between ∼ 30 and ∼ 60 Myr ago, with rates that stay high until ∼ 1 Gyr ago, in both solutions. We still see SF in older times bins, but the increasing incompleteness and the worse time resolution make our results at the older epochs less reliable. The internal reddening we find in this region is E(B − V) = 0.075, i.e. A F606W = 0.22 and A F814W = 0.14. 
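For reference, the E(B − V) values quoted for the individual regions translate into filter extinctions through effective coefficients that can be read off the numbers above (A_F606W/E(B − V) ≈ 2.9 and A_F814W/E(B − V) ≈ 1.9); these shifts, together with the distance modulus, are what move the model magnitudes from the absolute to the observed frame. The snippet below is just that arithmetic.

```python
# Filter extinctions implied by the values quoted above, and the total shift
# applied to model (absolute) magnitudes: m = M + DM + A_filter.
R_F606W = 0.22 / 0.075   # ~2.9, from A_F606W = 0.22 at E(B-V) = 0.075
R_F814W = 0.14 / 0.075   # ~1.9, from A_F814W = 0.14 at E(B-V) = 0.075

DM = 27.45                        # best-fit distance modulus
for ebv in (0.025, 0.05, 0.075):  # internal reddenings of Regions 3, 4, 1-2
    a606, a814 = R_F606W * ebv, R_F814W * ebv
    print(f"E(B-V)={ebv:.3f}: A_F606W={a606:.2f}, A_F814W={a814:.2f}, "
          f"total F814W shift = {DM + a814:.2f} mag")
```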
Region 3 The decreasing young-to-old SFR trend continues in Region 3, as shown in Figure 9, third row. We find a roughly constant SF, with some ups and downs in particular in the COLIBRI solution. The internal reddening for this region is E(B − V) = 0.025, i.e. A F606W = 0.07 and A F814W = 0.05, which is lower that in the Regions 1 and 2, where the higher recent SF activity creates more dust, thus we can expect a more severe extinction. Interestingly, the two solutions show a significant discrepancy in the last time bin, with the COLIBRI models providing a rate about 3 times higher than the MIST models. This is most likely a systematic difference in the stellar tracks, and in the way that the most uncertain stellar evolution parameters (like overshooting) are treated. It is worth noticing that the COLIBRI tracks also produce a dip in the SFH between ∼ 2 and ∼ 3 Gyr ago that is much less pronounced in the MIST models, maybe balancing that old peak at ages older than 10 Gyr ago. This is why it is important to use different sets of models, to compare their systematics and provide feedback to keep improving our knowledge on stellar evolution. We also stress that this age range is constrained by stars at the edge of the 50% completeness, and should be interpreted with caution. Region 4 Region 4 includes the most external part of UGC 4483, and our most complete data set. The SFH here, shown in the bottom row of Figure 9, is compatible with 0 back to ∼ 30 Myr ago, and its peak is at ages older than ∼ 3 Gyr ago, according to both solutions. We find an internal reddening of E(B − V) = 0.05, i.e. A F606W = 0.14 and A F814W = 0.09. This is lower than in Regions 1 and 2 but counter-intuitively a bit higher than in Region 3; however, the sensitivity of the CMD-fitting procedure to reddening can be degenerate with distance, so these values should be taken as an indication. Moreover, this reddening difference corresponds to a magnitude difference of about 0.05 mag, which gives a distance difference of about 10 pc, consistent with a distance difference along the line-of-sight between the body and the halo of the galaxy. The very deep CMD reaches 50% of completeness at m F814W ∼ 28, thus fainter than where we expect to see the HB of the galaxy (see the isochrones in Figure 4). Despite the systematic differences we just discussed, both sets of models show a consistent SFR at all ages, including those older than 10 Gyr ago, putting an important constraint on the age of this galaxy outside the Local Group. Since we are working at the edge of the data completeness, we tested the SFH recovery forcing the SF to start only 8 Gyr ago, instead of 13.7, to compare the results and check if such a SFH would still be compatible with the observed CMD. The result of this test is shown in Figure 10. Even though the difference with Figure 9 might not seem significant, the recovered CMD, residuals, and output likelihood of this solution are significantly worse than the case shown in Figure 9, bottom row. Not only the faintest part of the RC/HB is not well reproduced, but also the RGB color does not match. This test provides a strong evidence that we indeed need a SFH older than 8 Gyr to fit our data; given the galaxy's low metallicity, we strongly believe that this points to the presence of an old HB, possibly leading to the existence of RRL variable stars in UGC 4483 (see Section 5.5 for a more detailed discussion). 
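The test just described (allowing SF only in the last 8, or 5, Gyr) amounts to pinning the weights of the older basis functions to zero before refitting and comparing the resulting likelihood with the unconstrained solution; a schematic version, with a placeholder age grid, is shown below.

```python
import numpy as np

# Placeholder lookback-time grid (Gyr) matching the basis functions of the fit.
age_edges = np.array([0.0, 0.03, 0.06, 0.1, 0.5, 1.0, 2.0, 3.0, 6.0, 10.0, 13.7])
age_mid = 0.5 * (age_edges[:-1] + age_edges[1:])

def constrain(weights, max_lookback_gyr):
    """Zero out the star formation in bins older than max_lookback_gyr."""
    w = np.asarray(weights, dtype=float).copy()
    w[age_mid >= max_lookback_gyr] = 0.0
    return w

weights = np.ones(age_mid.size)    # some trial (unconstrained) SFH
w_8gyr = constrain(weights, 8.0)   # SF forced to start only 8 Gyr ago
w_5gyr = constrain(weights, 5.0)
# Refitting with these masks and comparing chi_P (Eq. 1) against the
# unconstrained best fit quantifies how much the oldest bins are required.
```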
If we shorten even more the lookback time at which we start the SF (e.g., 5 Gyr ago), the situation keeps worsening, as we remove from the synthetic CMD not only stars associated with the RC/HB, but also stars with younger ages that are present in the observed CMD. It is worth mentioning that in all regions the bestfit solution is obtained by using a distance modulus DM = 27.45 ± 0.10, which is slightly lower than the one provided by Izotov & Thuan (2002), but consistent with the one provided by Dolphin et al. (2001), i.e. 27.53 ± 0.12. A summary of the derived SFRs and stellar masses formed at various epochs in the different regions of the galaxy is given in Table 1. In particular for Regions 3 and 4, we can notice a significant difference in the epoch of the main SF peak derived with the COLIBRI or the MIST models. As already discussed in Section 5.3, the 50% completeness in Region 3 is at the edge of the HB or the faintest part of the RC phases, thus systematic uncertainties in the stellar evolution models can greatly affect the resulting age and rate determination, depending on how much of the synthetic HB falls within or outside the CMD region covered by the photometry. For Region 4, the situation is better, thanks to the lower crowding; despite the systematics still affecting the fit, Figure 9 (bottom right panel) shows a very good agreement between the two solutions, and the difference in age peak reported in Table 1 (3 − 6 Gyr for the MIST models, versus 10 − 13.7 Gyr for the COLIBRI models) is simply the result of small variations of the SFRs in the last 3 age bins. Searching for RRL variable stars in UGC 4483 As already mentioned, the detection of RRL stars in a galaxy can unambiguously confirm the presence of an HB, and unveil a stellar population at least ∼ 10 Gyr old. RRLs are ∼ 3 mag brighter than coeval turnoff stars, and the typical form of their light variation makes them easily recognized even in very crowded fields and in galaxies where a 10 Gyr population may be buried into the younger stars. To look for these variables in UGC 4483, the observations have been reduced as a time series to provide a full mapping of the classical instability strip of pulsating variables with periods from about half a day to a couple of days. A detailed analysis of the candidate RRLs and theirs light curves will be the subject of a forthcoming paper (Garofalo et al., in preparation), while here we simply mention what is relevant to the present paper. In our analysis, we found six stars with light curves compatible with those of RRL variables. In particular, two of them are in the most external region of the galaxy we analyzed, Region 4, where crowding conditions are less severe and the completeness much more favorable than in more internal regions. Within the uncertainties of the HB modeling, these candidates also have col- ors and magnitudes compatible with being indeed RRLs, as shown in Figure 11. In the left panel, we plot the F606W versus F606W−F814W color-magnitude diagram of UGC 4483 in grey, the six candidates in green, and the MIST (in red) and COLIBRI (in orange) 10 Gyr old isochrones for a metallicity of [Fe/H] = −2.5. We can notice how different the models are, with the MIST HB being more than half a magnitude brighter than the COLIBRI one. At this metallicity, our candidates fall between the two sets of HB. 
As shown in the right panel, the disagreement between the isochrones starts to disappear at [Fe/H] = −2.0, and most of our candidates would fall slightly brighter and bluer than the HB, but still compatible with it once the uncertainties on both data and models are considered. DISCUSSION The Local Group is an incredible environment, full of interesting processes and galaxies of many different types. However, since there are not known BDCs within it (except for IC 10, sometimes considered as a such), in order to study this class of very actively star-forming, low-metallicity, systems the only possibility is to push our limits beyond its borders. Even though the distance makes observations challenging even for HST, with the photometry reaching only the brightest evolutionary phases, many studies tried to characterize the formation and evolution of these intriguing objects. Table 1 Summary of the derived star formation rates and stellar masses in the different regions of UGC 4483. ( * ) The last column is the ratio between the two previous ones. Among these works, some embarked on the challenge of applying the synthetic CMD method to the observed CMDs, trying to reconstruct the galaxies' SFHs within the reachable lookback time (see, e.g., Aloisi et al. 1999for I Zw 18, Schulte-Ladbeck et al. 2001for I Zw 36, or Annibali et al. 2003for NGC 1705. Despite the obvious differences among the individual SFHs, all these dwarfs show a qualitatively similar behavior, with a strong ongoing SF activity and a moderate and rather continuous star formation at older epochs. Most notably, they all show an RGB, meaning that they contain stars as old as the lookbacktime reached by the photometry. Finding old stars in these extremely metal-poor galaxies is a key information to understand how they compare to other systems and to place them in the general context of galaxy formation and evolution. For this reason, it is crucial to try to reach even older stellar evolutionary features, to finally figure out whether BCDs are really old systems as currently believed. We presented here a detailed analysis of UGC 4483, the closest example of the BCD category, which we studied thanks to deep new data obtained with the exact purpose of reaching stars older than a few Gyr. Our new WFC3/UVIS observations reach more than 4 mag deeper than the TRGB, and allow us to detect and characterize for the first time the population of core He-burning stars with masses 2 M . To take into account the different crowding and SF conditions across the galaxy, we used isophotal contours of the F606W image to divide it into sub-regions, as shown in Figure 2; the resulting SFHs from the MIST solution are shown in Figure 12, where the rates have been normalized to each region's area to facilitate the comparison. As expected for a BCD, the main recent activity is in the central Region 1, which, despite the uncertainties, shows a continuous increase of SF in the most recent time bins, with a rate about 10 times higher than the average. Going outwards, the rate densities generally decrease, together with the ratio of present-to-past average SF (see the last column of Table 1 for the details). Indeed, Regions 3 and 4 had already formed 50% of their total stellar mass around ∼ 5 Gyr ago, while this occurred between 2 and 3 Gyr ago for Regions 1 and 2, confirming that the SF proceeded from outside in. 
Even if we could not derive the SFHs there, it is clear that this trend still holds when we consider the two innermost regions of the galaxy, 0a and 0b, sites of very bright and active Hii regions. In particular, Region 0a hosts a super star cluster with absolute magnitudes of M V = −8.98 ± 0.24 and M I = −9.06 ± 0.16, corresponding to an age of ∼ 15 Myr and an initial mass of ∼ 10 4 M , as derived by Dolphin et al. (2001). For comparison, this is just slightly fainter than 30 Doradus, the star forming region within the Large Magellanic Cloud. Using the equivalent width of the Hβ emission line, Izotov & Thuan (2002) derived a much younger age of 4 Myr. In the outer part of the galaxy, Region 4, we find the SF peak at ages 3 Gyr, and the rates are consistent with a SFH starting 13.7 Gyr ago. This is the first time that we can put such a strong constraint on a galaxy outside the Local Group, and despite the uncertainties and systematics of the stellar evolution models, we believe this to be a robust result which is supported also by the detection of a number of candidate RR Lyrae stars. The implication is that UGC 4483, and possibly all BCD galaxies, are indeed as old as a Hubble time, and not young systems as previously believed. Indeed, despite the very young recent activity, about 87% of the total stellar mass of the galaxy was formed at ages older than 1 Gyr. The color extension of the identified HB region includes the expected location of the RR Lyrae gap, corresponding to the pulsation instability strip. Follow-up timeseries observations would be very useful to definitely confirm our detection in UGC 4483 (Garofalo et al., in preparation) of candidate pulsating stars of this class (unambiguous tracers of a 10 Gyr old population) and to be able to use the inferred pulsation characteristics (periods, amplitudes, etc.) to trace the properties of the oldest stellar population in UGC 4483. It is interesting to notice that, as far as the SFH is concerned, UGC 4483 is fairly similar to many dIrrs, in spite of being classified a BCD: it shows a rather continuous SF, with no evidence of short, intense bursts, and no long quiescent phases, at least within the limits of our time resolution. This adds to the evidence that on average the SFH is similar in the two classes of star-forming dwarfs, and the somewhat old idea that BCDs are experiencing now much higher SF activities than dIrrs is not supported by their CMDs. We can indeed compare UGC 4483 to the cases of, e.g., NGC 1569(Grocholski et al. 2012 or NGC 4449 (Sacchi et al. 2018), both very active dIrrs. The extremely low metallicity of UGC 4483 makes our result of the SF being active throughout the whole Hubble time even more important in terms of galaxy chemical evolution. Indeed, this system has been making stars, therefore synthesizing heavy elements for all its long life, and yet it is still extremely metal poor today. This implies not only that significant infall of metal poor gas must have occurred, but also that other mechanisms, such as outflows/winds removing heavy elements, are necessary. It is interesting to check how our results compare to the literature, in particular with previously derived SFHs. Based on HST /WFPC2, Dolphin et al. (2001) derived a SFR of 1.3 × 10 −3 M /yr, which is almost a factor of two higher than ours, (7.01 ± 0.44) × 10 −4 M /yr. 
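A quick consistency check of the global numbers quoted in this comparison: integrating the average rate over a Hubble time reproduces the total astrated mass, and the fraction formed before 1 Gyr ago follows directly. The few lines below only restate that arithmetic.

```python
# Mass budget implied by the average star formation rate quoted above.
sfr_avg = 7.01e-4            # M_sun / yr, average over the whole Hubble time
t_hubble = 13.7e9            # yr

m_total = sfr_avg * t_hubble
print(f"total astrated mass ~ {m_total:.2e} M_sun")      # ~9.6e6 M_sun

old_fraction = 0.87          # fraction formed earlier than 1 Gyr ago
print(f"mass formed before 1 Gyr ago ~ {old_fraction * m_total:.2e} M_sun")
```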
However, their CMD is not particularly deep (the faintest stars have an I magnitude of 25, with much larger photometric errors), so it is likely that they are biased by the brightest young stars in the upper part of the CMD. On the other hand, our rate is a lower limit as it is based on Regions 1 to 4, thus excluding the most active Regions 0a and 0b which have not enough stars to run a full SFH derivation. Based on the same data, McQuinn et al. (2010) derived a more detailed SFH, which shows a very recent SF peak (within the last 50 Myr), consistent with what we find in Region 1, and a rather continuous activity up to 1 Gyr ago. Their peak rate, however, is about 1.1 × 10 −2 M /yr, while we reach at most 2.5 × 10 −3 M /yr when considering the uncertainties. This can be again the result of the different sampling of the stellar populations, in particular in the most central regions of the galaxy. They also find an old population, which is consistent with the RGB phase revealed by their CMD, and confirmed by our deeper one. Also Weisz et al. (2011) used the same data and code to derive the SFH of this galaxy, obtaining similar results (their average SFR of 1.57 × 10 −3 M /yr). From the Hα luminosity (Gil de Paz et al. 2003;Kennicutt et al. 2008), we can infer the current ( 10 Myr) SFR following the prescription of Murphy et al. (2011), which assumes a Kroupa IMF. We obtain 2.2 × 10 −3 M /yr, compatible within the error bars with our results in the most recent time bin of Region 1, but still higher, as expected as the Hα mainly traces the emission from the two most active Regions 0a and 0b (see Figure 1, right panel). Finally, from the analysis of VLA observations, Lelli et al. (2012b) found that UGC 4483 has a steeply-rising rotation curve, making it the lowest-mass galaxy with a differentially rotating Hi disk. These rotation-velocity gradients are directly related to the dynamical mass surface densities, and signal a strong central concentration of mass, also found in other BCDs, like I Zw 18 (Lelli et al. 2012a), NGC 2537 (Matthews & Uson 2008), and NGC 1705 (Meurer et al. 1998; see also Figure 10 of Lelli et al. 2012b). They also showed that the central mass concentration cannot be explained by the newly formed stars or by the concentration of the Hi, implying that either the progenitors of BCDs are compact, gas-rich dwarfs, or there must be a mechanism (external, such as interactions and mergers, or internal, such as torques from massive star-forming clumps, Elmegreen et al. 2012) leading to a concentration of gas, old stars, and/or dark matter, causing the SF increase. SUMMARY AND CONCLUSIONS Here we summarize the main results of this paper, where we presented new WFC3/UVIS data of the BCD galaxy UGC 4483, and investigated its resolved stellar populations and radial SFH. -The CMD of UGC 4483 is populated by many generations of stars, from young MS and BL stars, to intermediate-age AGB and RGB stars, and older RC and possibly HB stars. In particular, we were able to reach and detect for the first time the population of core He-burning stars with masses 2 M . -The stellar populations in the galaxy have the typical distribution of star-forming irregular galaxies, with the youngest stars being more centrally concentrated and close to the inner star-forming regions, while the distribution becomes more and more uniform as the stars age. 
-From the SFH recovered in different radial regions of the galaxy, we found a declining trend of the overall SF activity and of the present-to-past SFR ratio going from inside out. In all regions, we found the best fit SFH using a distance modulus of DM = 27.45 ± 0.10, slightly lower than previous estimates. -Using the synthetic CMD method, we determined an average SFR over the whole Hubble time of (7.01 ± 0.44) × 10 −4 M /yr, corresponding to a total astrated stellar mass of (9.60 ± 0.61) × 10 6 M , 87% of which went into stars at epochs earlier than 1 Gyr ago. These are lower limits, as they do not include the two innermost regions of the galaxy, hosting very active H ii regions and a super star cluster almost as luminous as 30 Doradus. -We found strong evidence of a 10 Gyr old population, which might be responsible for the presence of a blue HB, as also suggested by our detection of a number of candidate RRL variable stars in UGC 4483. These data are associated with the HST GO Program 15194 (PI A. Aloisi). Support for this program was provided by NASA through grants from the Space Telescope Science Institute. F.A., M.C., and M.T. acknowledge funding from the INAF Main Stream program SSH 1.05.01.86.28. We thank the anonymous referee for the useful comments and suggestions that helped to improve the paper. REFERENCES Figure 1 . 1Left panel. 3-color composite image of UGC 4483 from our HST /WFC3 observations: blue corresponds to F606W (broad V ), red to F814W (I), while the green channel was obtained using the mean of the two. Hi contours from the VLA-ANGST survey(Ott et al. 2012) corresponding to column densities of N H = 3.0, 2.5, 2.0, 1.5, 1.0, and 0.5 × 10 21 cm −2 have been superimposed to the HST image. The displayed field of view is 1.7 × 2.2 . North is up, East is left. Right panel. F656N image of UGC 4483 with superimposed the same Hi contours. 3 .Figure 2 . 32STELLAR POPULATIONSFigure 1clearly shows the very young stellar population of this BCD galaxy. The bright Hii region at the northern edge of UGC 4483 is evident in both the composite 3-color image in the left panel and in Hα emission in the right panel, which also reveals another active region more to the south. The Hi distribution, shown with the contours overlapped on the images, F606W image with over-plotted the isophotal contours used to divide the galaxy into different regions (0a, 0b, 1, 2, 3, and 4) and to study the radial behaviour of the stellar populations and SFH. Notice that the image is rotated by 90 degrees to the right with respect to the ones inFigure 1. Figure 3 . 3F814W versus F606W−F814W color-magnitude diagram of UGC 4483 corresponding to the field of view covered by our UVIS imaging (after the quality cuts, see Section 2). The main stellar evolutionary phases are indicated (see Section 3). Figure 4 . 4Same CMD of Figure 3 with MIST isochrones (Choi et al. 2016; Dotter 2016) of 4 different metallicities over-plotted: [Fe/H] = −2.9 (top left), [Fe/H] = −2.0 (top right), [Fe/H] = −1.5 (bottom left), and [Fe/H] = −1.0 (bottom right), shifted to match the distance and foreground extinction of the galaxy. We can recognize the HB feature in both the [Fe/H] = −2.9 and the [Fe/H] = −2.0 10 Gyr old isochrones. Figure 5 . 5Top left panel. 
Selection of different stellar populations in the CMD: in blue, upper MS stars with ages ≲ 30 Myr; in green, BL stars with intermediate ages between ∼ 50 and ∼ 500 Myr; in red, RGB stars older than ∼ 2 Gyr; in magenta, RC stars older than ∼ 2 Gyr; in orange, HB stars older than ∼ 10 Gyr. The other panels show the corresponding spatial distribution of the age-selected stars, as labeled, with contours showing the different regions of the galaxy (see Figure 2).
Figure 6. Photometric errors in F606W (top panel) and F814W (bottom panel) from our artificial star tests; the contours indicate the 1σ, 2σ, and 3σ levels of the distributions.
from 0.1 to 350 M⊙; 30% binary fraction; [Fe/H] from −2.9 to 1.0 in steps of 0.1; DM = 27.60 (Izotov & Thuan 2002); foreground extinction E(B − V) = 0.034 (Schlafly & Finkbeiner 2011); photometric errors and incompleteness from our ASTs (see Section 4, and
Figure 8. CMDs of the regions we identified in the galaxy (see
Figure 10. Best-fit solution for Region 4, but with the SFH forced to stop 8 Gyr ago (to be compared with Figure 9, bottom row).
Figure 11. F606W versus F606W−F814W color-magnitude diagram of UGC 4483 in grey, the six RRL candidates we found in green, and the MIST (in red) and COLIBRI (in orange) 10 Gyr old isochrones for a metallicity of [Fe/H] = −2.5 (left panel) and [Fe/H] = −2.0 (right panel). The isochrones were shifted to match the distance and foreground extinction of the galaxy.
Figure 12. SFR surface densities (SFR/area) as a function of time in different regions of UGC 4483, from the MIST solutions.
Figure 7. Completeness in F606W (left panel) and F814W (right panel) from our artificial star tests in the various regions of the galaxy, highlighting the very different crowding conditions from inside out. The dashed horizontal line marks the 50% completeness level.
Aloisi, A., Tosi, M., & Greggio, L. 1999, AJ, 118, 302
Aloisi, A., van der Marel, R. P., Mack, J., et al. 2005, ApJ, 631, L45
Aloisi, A., Clementini, G., Tosi, M., et al. 2007, ApJ, 667, L151
Annibali, F., Greggio, L., Tosi, M., Aloisi, A., & Leitherer, C. 2003, AJ, 126, 2752
Annibali, F., Bellazzini, M., Correnti, M., et al. 2019, ApJ, 883, 19
Bressan, A., Marigo, P., Girardi, L., et al. 2012, MNRAS, 427, 127
Caffau, E., Ludwig, H. G., Steffen, M., et al. 2015, A&A, 579, A88
Choi, J., Dotter, A., Conroy, C., et al. 2016, ApJ, 823, 102
Cignoni, M., Sabbi, E., van der Marel, R. P., et al. 2015, ApJ, 811, 76
Clementini, G., Held, E. V., Baldacci, L., & Rizzi, L. 2003, ApJ, 588, L85
Dolphin, A. E. 2000, PASP, 112, 1383
Dolphin, A. E., Makarova, L., Karachentsev, I. D., et al. 2001, MNRAS, 324, 249
Dotter, A. 2016, ApJS, 222, 8
Elmegreen, B. G., Zhang, H.-X., & Hunter, D. A. 2012, ApJ, 747, 105
Gallart, C., Zoccali, M., & Aparicio, A. 2005, ARA&A, 43, 387
Gil de Paz, A., Madore, B. F., & Pevunova, O. 2003, ApJS, 147, 29
Gonzaga, S., et al. 2012, The DrizzlePac Handbook
Grocholski, A. J., van der Marel, R. P., Aloisi, A., et al. 2012, AJ, 143, 117
Izotov, Y. I., & Thuan, T. X. 2002, ApJ, 567, 875
Kennicutt, R. C., Jr., Lee, J. C., Funes, J. G., et al. 2008, ApJS, 178, 247
Kroupa, P. 2001, MNRAS, 322, 231
Lelli, F., Verheijen, M., Fraternali, F., & Sancisi, R. 2012a, A&A, 537, A72
Lelli, F., Verheijen, M., Fraternali, F., & Sancisi, R. 2012b, A&A, 544, A145
Marigo, P., Girardi, L., Bressan, A., et al. 2017, ApJ, 835, 77
Matthews, L. D., & Uson, J. M. 2008, AJ, 135, 291
McQuinn, K. B. W., Skillman, E. D., Cannon, J. M., et al. 2010, ApJ, 721, 297
McQuinn, K. B. W., Skillman, E. D., Dolphin, A., et al. 2015, ApJ, 812, 158
Meurer, G. R., Staveley-Smith, L., & Killeen, N. E. B. 1998, MNRAS, 300, 705
Murphy, E. J., Condon, J. J., Schinnerer, E., et al. 2011, ApJ, 737, 67
Odekon, M. C. 2006, AJ, 132, 1834
Ott, J., Stilp, A. M., Warren, S. R., et al. 2012, AJ, 144, 123
Rejkuba, M., Greggio, L., Harris, W. E., Harris, G. L. H., & Peng, E. W. 2005, ApJ, 631, 262
Sacchi, E., Cignoni, M., Aloisi, A., et al. 2018, ApJ, 857, 63
Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103
Schulte-Ladbeck, R. E., Hopp, U., Greggio, L., Crone, M. M., & Drozdovsky, I. O. 2001, AJ, 121, 3007
Thuan, T. X., & Seitzer, P. O. 1979, ApJ, 231, 327
Tolstoy, E., Hill, V., & Tosi, M. 2009, ARA&A, 47, 371
Tosi, M., Greggio, L., Marconi, G., & Focardi, P. 1991, AJ, 102, 951
van Zee, L., & Haynes, M. P. 2006, ApJ, 636, 214
Weisz, D. R., Dalcanton, J. J., Williams, B. F., et al. 2011, ApJ, 739, 5
Williams, B. F., Lang, D., Dalcanton, J. J., et al. 2014, ApJS, 215, 9
[]
[ "Cluster algebras and quantum affine algebras", "Cluster algebras and quantum affine algebras" ]
[ "David Hernandez ", "Bernard Leclerc " ]
[]
[]
Let C be the category of finite-dimensional representations of a quantum affine algebra U q ( g) of simply-laced type. We introduce certain monoidal subcategories C ℓ (ℓ ∈ N) of C and we study their Grothendieck rings using cluster algebras.
10.1215/00127094-2010-040
[ "https://arxiv.org/pdf/0903.1452v3.pdf" ]
16,280,208
0903.1452
94aee3ad95f6973636375f5b078df5b576856613
Cluster algebras and quantum affine algebras 19 Nov 2009 David Hernandez Bernard Leclerc Cluster algebras and quantum affine algebras 19 Nov 2009 Let C be the category of finite-dimensional representations of a quantum affine algebra U q ( g) of simply-laced type. We introduce certain monoidal subcategories C ℓ (ℓ ∈ N) of C and we study their Grothendieck rings using cluster algebras. 1 Introduction 1.1 Let g be a simple Lie algebra of type A n , D n or E n , and let U q ( g) denote the corresponding quantum affine algebra, with parameter q ∈ C * not a root of unity. The monoidal category C of finite-dimensional U q ( g)-modules has been studied by many authors from different perspectives (see e.g. [AK,CP1,FR,GV,KS,N1]). In particular its simple objects have been classified by Chari and Pressley, and Nakajima has calculated their character in terms of the cohomology of certain quiver varieties. In spite of these remarkable results many basic questions remain open, and in particular little is known about the tensor structure of C . When g = sl 2 , Chari and Pressley [CP2] have shown that every simple object is isomorphic to a tensor product of simple objects of a special type called Kirillov-Reshetikhin modules. Conversely, they have shown that a tensor product S 1 ⊗· · ·⊗S k of Kirillov-Reshetikhin modules is simple if and only if S i ⊗ S j is simple for every 1 i < j k. Moreover, S i ⊗ S j is simple if and only if S i and S j are "in general position" (a combinatorial condition on the roots of the Drinfeld polynomials of S i and S j ). Hence, the Kirillov-Reshetikhin modules can be regarded as the prime simple objects of C [CP6], and one knows which products of primes are simple. As an easy corollary, one can see that the tensor powers of any simple object of C are simple. For g = sl 2 , the situation is far more complicated. Thus, already for g = sl 3 , we do not know a general factorization theorem for simple objects (see [CP6], where a tentative list of prime simple objects is conjectured). In fact, it was shown in [L] that the tensor square of a simple object of C is not necessarily simple in general, so one should not expect results similar to the sl 2 case for other Lie algebras g. Because of these difficulties, we decide in this paper to focus on some smaller subcategories. We introduce a sequence C 0 ⊂ C 1 ⊂ · · · ⊂ C ℓ ⊂ · · · , (ℓ ∈ N), of full monoidal subcategories of C , whose objects are characterized by certain strong restrictions on the roots of the Drinfeld polynomials of their composition factors. By construction, the Grothendieck ring R ℓ of C ℓ is a polynomial ring in n(ℓ + 1) variables, where n is the rank of g. Our starting point is that R ℓ is naturally equipped with the structure of a cluster algebra. Recall that cluster algebras were introduced by Fomin and Zelevinsky [FZ1] as a combinatorial device for studying canonical bases and total positivity. They found immediately lots of applications, including a proof of a conjecture of Zamolodchikov concerning certain discrete dynamical systems arising from the thermodynamic Bethe ansatz, called Y -systems [FZ2]. As observed by Kuniba, Nakanishi and Suzuki [KNS], Y -systems are strongly related with the representation theory of U q ( g) via some other systems of functional relations called T -systems. 
It was conjectured in [KNS] that the characters of the Kirillov-Reshetikhin modules are solutions of a T -system, and this was later proved by Nakajima [N2] in the simply-laced case, and by Hernandez in the general case [H3]. Now it is easy to notice that in the simply-laced case the equations of a T -system are exactly of the same form as the exchange relations in a cluster algebra. This led us to introduce a cluster algebra structure on R ℓ by using an initial seed consisting of a choice of n(ℓ+ 1) Kirillov-Reshetikhin modules in C ℓ . The exchange matrix of this seed encodes nℓ equations of the T -system satisfied by these Kirillov-Reshetikhin modules. (Note that the seed contains n frozen variables -or coefficients -in the sense of [FZ1].) By definition of a cluster algebra, one can obtain new seeds by applying sequences of mutations to the initial seed. Then one of our main conjectures is that all the new cluster variables produced in this way are classes of simple objects of C ℓ . (Note that in general, these simple objects are no longer Kirillov-Reshetikhin modules.) 1.2 For ℓ = 0, the cluster structure of R 0 is trivial: there is a unique cluster consisting entirely of frozen variables. The case ℓ = 1 is already very interesting, and most of this paper will be devoted to it. Recall that Fomin and Zelevinsky have classified the cluster algebras with finitely many cluster variables in terms of finite root systems [FZ3]. It turns out that for every g the ring R 1 has finitely many cluster variables, and that its cluster type coincides with the root system of g. Therefore, one may expect that the tensor structure of the simple objects of the category C 1 can be described in "a finite way". In fact we conjecture that for every g the category C 1 behaves as nicely as the category C for sl 2 , and we prove it for g of type A n and D 4 . More precisely, we single out a finite set of simple objects of C 1 whose Drinfeld polynomials are naturally labeled by the set of almost positive roots of g (i.e. , positive roots and negative simple roots). Recall that the almost positive roots are in one-to-one correspondence with the cluster variables [FZ3], so we shall call these objects the cluster simple objects. To these objects we add n distinguished simple objects which we call frozen simple objects. Our first claim is that the classes of these objects in R 1 coincide with the cluster variables and frozen variables. Recall also that the cluster variables are grouped into overlapping subsets of cardinality n called clusters [FZ1]. The number of clusters is a generalized Catalan number, and they can be identified with the faces of the dual of a generalized associahedron [FZ2]. Our second claim is that a tensor product of cluster simple objects is simple if and only if all the objects belong to a common cluster. Moreover, the tensor product of a frozen simple object with any simple object is again simple. It follows that every simple object of C 1 is a tensor product of cluster simple objects and frozen simple objects. As a consequence, the tensor powers of any simple object of C 1 are simple. To prove this, we show that a tensor product S 1 ⊗ · · · ⊗ S k of simple objects of C 1 is simple if and only if S i ⊗ S j is simple for every i = j. This result is proved uniformly for all types. Note that only the frozen objects and the cluster objects attached to positive simple roots and negative simple roots are Kirillov-Reshetikhin modules. 
The remaining cluster objects (labelled by the positive non simple roots) probably deserve to be studied more closely. 1.3 When ℓ > 1 the ring R ℓ has in general infinitely many cluster variables, grouped into infinitely many clusters. A notable exception is the case g = sl 2 , for which R ℓ is a cluster algebra of finite type A ℓ in the classification of [FZ3]. In this special case it follows from [CP2] that, again, the classes in R ℓ of the simple objects of C ℓ are precisely the cluster monomials of R ℓ . We conjecture that for arbitrary g and ℓ, every cluster monomial of R ℓ is the class of a simple object. We also conjecture that, conversely, the class of a simple object S in C ℓ is a cluster monomial if and only if S ⊗ S is simple. In this case, following [L], we call S a real simple object. We believe that real simple objects form an interesting class of irreducible U q ( g)-modules, and the meaning of our partial results and conjectures is that their characters are governed by the combinatorics of cluster algebras. 1.4 Let us now describe the contents of the paper in more detail. In Section 2 we recall the definition of a cluster algebra, and we introduce the new notion of monoidal categorification of a cluster algebra (Definition 2.1). We show (Proposition 2.2) that the existence of a monoidal categorification gives an immediate answer to some important open problems in the theory of cluster algebras, like the linear independence of cluster monomials, or the positivity of the coefficients of their expansion with respect to an arbitrary cluster. In Section 3 we briefly review the theory of finite-dimensional representations of U q ( g) and we introduce the categories C ℓ . We also recall the definition of the Kirillov-Reshetikhin modules and we review the T -system of equations that they satisfy. In Section 4 we introduce some simple objects S(α) of C 1 attached to the almost positive roots α, and we formulate our conjecture (Conjecture 4.6) for the category C 1 . It states that C 1 is a monoidal categorification of a cluster algebra A with the same Dynkin type as g, and that the S(α) are the cluster simple objects. We illustrate the conjecture in type A 3 . In Section 5 we review the definition and main properties of the q-characters of Frenkel-Reshetikhin. One of the main tools to calculate them is the Frenkel-Mukhin algorithm which we recall and illustrate with examples. In Section 6, we introduce some truncated versions of the q-characters for C 1 . These new truncated characters are much easier to calculate and they contain all the information to determine the composition factors of an object of C 1 . The main result of this section (Proposition 6.7) is an explicit formula for the truncated q-character of S(α) when α is a multiplicity-free positive root. In Section 7, we review following [FZ2,FZ5] the F-polynomials of the cluster algebra A . These are variants of the Fibonacci polynomials of [FZ2], which are the building blocks of the general solution of a Y -system. They satisfy a functional equation similar to a T -system and each cluster variable can be expressed in terms of its F-polynomial in a simple way (Equation (32)). We show that Conjecture 4.6 (i) is equivalent to the fact that the (normalized) truncated q-characters of the cluster simple objects are equal to the F-polynomials, and we prove it for the multiplicity-free roots (Theorem 7.8). In Section 8, we prove an important tensor product theorem for the category C 1 (Theorem 8.1): if S 1 , . . . 
, S k are simple objects of C 1 , then S 1 ⊗ · · · ⊗ S k is simple if and only if S i ⊗ S j is simple for every i = j. In Section 9, we introduce following [FZ2] the notions of compatible roots and cluster expansion. Because of Theorem 8.1 and of the existence and uniqueness of a cluster expansion [FZ2], we reduce Conjecture 4.6 (ii) for a given g to a finite check: one has to verify that S(α) ⊗ S(β ) is simple for every pair (α, β ) of compatible roots. In Section 10 and Section 11 we prove Conjecture 4.6 in type A n and D 4 . Conjecture 4.6 (i) is proved more generally in type D n , by showing that the truncated q-characters of cluster simple objects are given by the explicit combinatorial formula of [FZ2] for the Fibonacci polynomials of 2-restricted roots. In Section 12 we present some applications of our results for C 1 . First we show that the qcharacters of the simple objects of C 1 are solutions of a system of functional equations similar to a periodic T -system. Secondly, we explain that the l-weight multiplicities appearing in the truncated q-characters of the cluster simple objects are equal to some tensor product multiplicities. This is reminiscent of the Kostka duality for representations of sl n , but in our case it is not limited to type A. Thirdly, we exploit some known geometric formulas for F-polynomials due to Fu and Keller [FK] to express the coefficients of the truncated q-characters of the simple objects of C 1 as Euler characteristics of some quiver grassmannians. This is similar to the Nakajima character formula for standard modules, but our formula works for simple modules. Finally in Section 13 we state our conjectures for C ℓ (arbitrary ℓ) and illustrate them for g = sl 2 (arbitrary ℓ) where they follow from [CP2], and g = sl 3 (ℓ = 2). We also explain how our conjecture for g = sl r (arbitrary ℓ) would essentially follow from a general conjecture of [GLS2] about the relation between Lusztig's dual canonical and dual semicanonical bases. Kedem [Ke] and Di Francesco [DFK] have studied another connection between quantum affine algebras and cluster algebras, based on other types of functional equations (Q-systems and generalized T -systems). Keller [Kel2] has obtained a proof of the periodicity conjecture for Ysystems attached to pairs of simply-laced Dynkin diagrams using 2-Calabi-Yau categorifications of cluster algebras. More recently, Inoue, Iyama, Kuniba, Nakanishi and Suzuki [IIKNS] have also studied the connection between Y -systems, T -systems, Grothendieck rings of U q ( g) and cluster algebras, motivated by periodicity problems. These papers do not study the relations between cluster monomials and irreducible U q ( g)-modules. 1.5 After this paper was submitted for publication, Nakajima [N5] gave a geometric proof of Conjecture 4.6 for all types A, D, E, using a tensor category of perverse sheaves on quiver varieties. This category also makes sense for non Dynkin quivers and Nakajima showed that its Grothendieck ring has a cluster algebra structure and that all cluster monomials are classes of simple objects. Thanks to Proposition 2.2 below, this yields strong positivity results for every acyclic cluster algebra with a bipartite seed. To the best of our knowledge, our conjecture for ℓ > 1 remains open. We have presented our main results in several seminars and conferences in 2008 and 2009 (IHP Paris (BL), MSRI Berkeley (BL), NTUA Athens (BL), ETH Zurich (DH), UNAM Mexico (DH), Math. Institute Oxford (DH), MIO Oberwolfach (BL)). 
We thank these institutions for their kind invitations. Special thanks are due to Arun Ram and MSRI for organizing in spring 2008 a program on combinatorial representation theory where a large part of this work was done. We also thank Keller for his preliminary Oberwolfach report on this work [Kel1]. Finally we thank Nakajima for helpful comments and stimulating discussions. 2 Cluster algebras and their monoidal categorifications 2.1 We refer to [FZ4] for an excellent survey on cluster algebras. Here we only recall the main definitions and results. 2.1.1 Let 0 ≤ n < r be some fixed integers. If B = (b_ij) is an r × (r − n)-matrix with integer entries, then the principal part of B is the square matrix obtained from B by deleting the last n rows. Given some k ∈ [1, r − n], define a new r × (r − n)-matrix µ_k(B) = (b′_ij) by
$$b'_{ij} = \begin{cases} -b_{ij} & \text{if } i=k \text{ or } j=k,\\[4pt] b_{ij} + \dfrac{|b_{ik}|\,b_{kj} + b_{ik}\,|b_{kj}|}{2} & \text{otherwise,} \end{cases} \qquad (1)$$
where i ∈ [1, r] and j ∈ [1, r − n]. One calls µ_k(B) the mutation of the matrix B in direction k. If B is an integer matrix whose principal part is skew-symmetric, then it is easy to check that µ_k(B) is also an integer matrix with skew-symmetric principal part. We will assume from now on that B has skew-symmetric principal part. In this case, one can equivalently encode B by a quiver Γ with vertex set {1, . . . , r} and with b_ij arrows from j to i if b_ij > 0 and −b_ij arrows from i to j if b_ij < 0. Note that Γ has no loop nor 2-cycle. Now Fomin and Zelevinsky define a cluster algebra A(B) as follows. Let F = Q(x_1, . . . , x_r) be the field of rational functions in r commuting indeterminates x = (x_1, . . . , x_r). One calls (x, B) the initial seed of A(B). For 1 ≤ k ≤ r − n define
$$x_k^{*} = \frac{\prod_{b_{ik}>0} x_i^{\,b_{ik}} + \prod_{b_{ik}<0} x_i^{\,-b_{ik}}}{x_k}. \qquad (2)$$
The pair (µ_k(x), µ_k(B)), where µ_k(x) is obtained from x by replacing x_k by x*_k, is the mutation of the seed (x, B) in direction k. One can iterate this procedure and obtain new seeds by mutating (µ_k(x), µ_k(B)) in any direction l ∈ [1, r − n]. Let S denote the set of all seeds obtained from (x, B) by any finite sequence of mutations. Each seed of S consists of an r-tuple of elements of F called a cluster, and of a matrix. The elements of a cluster are its cluster variables. Every seed has r − n neighbours obtained by a single mutation in direction 1 ≤ k ≤ r − n. One does not mutate the last n elements of a cluster; they are called frozen variables and belong to every cluster. We then define the cluster algebra A(B) as the subring of F generated by all the cluster variables of the seeds of S. The integer r − n is called the rank of A(B). A cluster monomial is a monomial in the cluster variables of a single cluster. Note that the exchange relation (2) is of the form
$$x_k\, x_k^{*} = m_{+} + m_{-} \qquad (3)$$
where m_+ and m_− are two cluster monomials. 2.1.2 The first important result of the theory is that every cluster variable z of A(B) is a Laurent polynomial in x with coefficients in Z. It is conjectured that the coefficients are positive. Note that because of (2), every cluster variable can be written as a subtraction-free rational expression in x, but this is not enough to ensure the positivity of its Laurent expansion. The second main result is the classification of cluster algebras of finite type, i.e. with finitely many different cluster variables.
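To make the mutation rule (1) concrete, here is a short Python sketch (our addition, not part of the paper) implementing matrix mutation. It checks, on the 6 × 3 matrix attached in Example 4.2 below to type A_3, that mutation in each of the r − n = 3 mutable directions is an involution; indices are 0-based and the function name is ours.

```python
# A minimal sketch of matrix mutation, formula (1).  B is an r x (r-n) integer
# matrix whose principal part (first r-n rows) is skew-symmetric; mutate(B, k)
# mutates it in the (0-based) direction k.

def mutate(B, k):
    r, c = len(B), len(B[0])
    Bp = [[0] * c for _ in range(r)]
    for i in range(r):
        for j in range(c):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]
            else:
                # the numerator below is always even, so // is exact
                Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
    return Bp

# The matrix of Example 4.2 (type A_3, rows indexed by the vertices 1,2,3,1',2',3'):
B = [[ 0, -1,  0],
     [ 1,  0,  1],
     [ 0, -1,  0],
     [-1,  0,  0],
     [ 0,  1,  0],
     [ 0,  0, -1]]

for k in range(3):                       # only the first r - n = 3 directions are mutable
    assert mutate(mutate(B, k), k) == B  # mutation is an involution on exchange matrices
print(mutate(B, 1))                      # the exchange matrix mutated in direction 2
```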
Fomin and Zelevinsky proved that this happens if and only if there exists a seed (z, C) such that the quiver attached to the principal part of C is a Dynkin quiver (that is, an arbitrary orientation of a Dynkin diagram of type A, D, E). In this case, the cluster monomials form a distinguished subset of A ( B), which is conjectured to be a Z-basis [FZ5,§11]. If A ( B) is not of finite cluster type, the cluster monomials do not span it, but it is conjectured that they are linearly independent. It is an interesting open problem to specify a "canonical" Zbasis of A ( B) containing the cluster monomials. 2.2 We now propose a natural framework which would yield positive answers to the above questions. We say that a simple object S of a monoidal category is prime if there exists no non trivial factorization S ∼ = S 1 ⊗ S 2 . We say that S is real if S ⊗ S is simple. z = N(x 1 , . . . , x n ) x d 1 1 · · · x d r r denote its cluster expansion with respect to the cluster x = (x 1 , . . . , x r ). Here the numerator N(x 1 , . . . , x r ) is a polynomial with coefficients in Z. Multiplying both sides by the denominator, we see that N(x 1 , . . . , x r ) is the class of the tensor product P := S(z) ⊗ S(x 1 ) ⊗d 1 ⊗ · · · ⊗ S(x r ) ⊗d r . Moreover, since x is a cluster, every monomial m = x k 1 1 · · · x k r r is the class of a simple object Σ = S(x 1 ) ⊗k 1 ⊗ · · · ⊗ S(x r ) ⊗k r of M . Hence the coefficient of m in N(x 1 , . . . , x r ) is equal to the multiplicity of Σ as a composition factor of P, thus it is nonnegative. This proves (i). By definition of a monoidal categorification, the cluster monomials form a subset of the set of classes of all simple objects of M , which is a Z-basis of the Grothendieck group. This proves (ii). 2 Remark 2.3 (i) In recent years, many examples of categorifications of cluster algebras have been constructed (see e.g. [MRZ,BMRRT,CC,GLS2,BIRS,CK,GLS4]). They are quite different from the monoidal categorifications introduced in this paper. Indeed, these categories are only additive and have no tensor operation. The multiplication of the cluster algebra reflects the direct sum operation of the category. We shall call these categorifications additive. Note that there is no analogue of Proposition 2.2 for additive categorifications. Although additive categorifications have been helpful for proving positivity of cluster expansions or linear independence of cluster monomials in some cases, this always requires some additional work, for example to show the positivity of some Euler characteristics. Finally, to recover the cluster algebra from its additive categorification one does not consider the Grothendieck group (which would be too small) but a kind of "dual Hall algebra" constructed via a cluster character. This is in general a complicated procedure. (ii) In view of the strength and simplicity of Proposition 2.2, one might wonder whether there exist any examples of monoidal categorifications of cluster algebras. One of the aims of this paper is to produce some examples using representations of quantum affine algebras. (iii) Let M be an abelian monoidal category. If M is a monoidal categorification of a cluster algebra A , then we get new combinatorial insights about the tensor structure of M . In particular if A has finite cluster type, we can express any simple object of M as a tensor product of finitely many prime objects, and this yields a combinatorial algorithm to calculate the composition factors of a tensor product of simple objects of M . 
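In rank 2 the finite-type phenomenon just recalled can be watched directly. The sketch below (our own illustration; sympy is assumed available) iterates the coefficient-free type A_2 exchange relation y_{m+1} y_{m−1} = y_m + 1 starting from the initial cluster (x_1, x_2): the sequence is periodic with period 5, exactly five distinct cluster variables occur, matching the five almost positive roots of A_2, and each of them is visibly a Laurent polynomial in x_1, x_2.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Coefficient-free type A_2: alternating mutations give the recurrence
#   y_{m+1} = (y_m + 1) / y_{m-1},  an instance of the exchange relation (2)/(3).
seq = [x1, x2]
for m in range(1, 7):
    seq.append(sp.cancel((seq[m] + 1) / seq[m - 1]))

variables = {sp.cancel(v) for v in seq}
assert len(variables) == 5                 # finite type A_2: five cluster variables
for v in sorted(variables, key=str):
    print(v)                               # each one is a Laurent polynomial in x1, x2
print(sp.cancel(seq[5] - x1), sp.cancel(seq[6] - x2))   # periodicity: 0 0
```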
So this can be a fruitful approach to study certain interesting monoidal categories M . This is the point of view we adopt in this paper. Finite-dimensional representations of U q ( g) In this section we briefly review some known results in the representation theory of quantum affine algebras. For more detailed surveys we refer the reader to the monograph [CP1,chap. 12] and the recent paper [CH]. 3.1 Let g be a simple Lie algebra over C of type A n , D n or E n . We denote by I = [1, n] the set of vertices of the Dynkin diagram, by A = [a i j ] i, j∈I the Cartan matrix of g, by h the Coxeter number, by Π = {α i | i ∈ I} the set of simple roots, by W the Weyl group, with longest element w 0 . Let U q ( g) denote the corresponding quantum affine algebra, with parameter q ∈ C * not a root of unity. U q ( g) has a Drinfeld-Jimbo presentation, which is a q-analogue of the usual presentation of the Kac-Moody algebra g. It also has a second presentation, due to Drinfeld, which is better suited to study finite-dimensional representations. There are infinitely many generators x + i,r , x − i,r , h i,s , k i , k −1 i , c, c −1 , (i ∈ I, r ∈ Z, s ∈ Z \ {0}), and a list of relations which we will not repeat (see e.g. [FR]). Remember that g can be realized as a central extension of the loop algebra g ⊗ C[t, t −1 ]. If x + i , x − i , h i (i ∈ I) denote the Chevalley generators of g then x ± i,r is a q-analogue of x ± i ⊗ t r , h i,s is a q-analogue of h i ⊗ t s , k i stands for the q-exponential of h i ≡ h i ⊗ 1, and c for the q-exponential of the central element. For every a ∈ C * there exists an automorphism τ a of U q ( g) given by τ a (x ± i,r ) = a r x ± i,r , τ a (h i,s ) = a s h i,s , τ a (k ±1 i ) = k ±1 i , τ a (c ±1 ) = c ±1 . There also exists an involutive automorphism σ given by σ (x ± i,r ) = x ∓ i,−r , σ (h i,s ) = −h i,−s , σ (k ±1 i ) = k ∓1 i , σ (c ±1 ) = c ∓1 . 3.2 We consider the category C of finite-dimensional U q ( g)-modules (of type 1). It is easy to see that if V is an object of C , then c acts on V as the identity, and the generators h i,s act by pairwise commuting endomorphisms of V . Since U q ( g) is a Hopf algebra, C is an abelian monoidal category. It is well-known that C is not semisimple. For an object V in C and a ∈ C * , we denote by V (a) the pull-back of V under τ a , and by V σ the pull-back of V under σ . The maps V → V (a) and V → V σ give auto-equivalences of C . 3.3 Let I denote the set of vertices of the Dynkin diagram of g. It was proved by Chari and Pressley that the simple objects S of C are parametrized by I-tuples of polynomials π S = (π i,S (u); i ∈ I) in one indeterminate u with coefficients in C and constant term 1, called the Drinfeld polynomials of S. In particular, for a ∈ C * and i ∈ I we have a fundamental module V i,a which is the simple object with Drinfeld polynomials: where i → i ∨ denotes the involution on I given by w 0 (α i ) = −α i ∨ . The Drinfeld polynomials behave simply under the action of the automorphisms τ a , namely, for a simple object S of C we have π i,S(a) (u) = π i,S (au), (i ∈ I). π j,V i,a (u) = 1 − au if j = i, 1 if j = i. Let They also behave simply under the action of the involution σ [CP3,Prop. 5 π i,S (u) = ∏ k (1 − a k u), then π i ∨ ,S σ (u) = ∏ k (1 − a −1 k q −h u). We also have the following compatibility with tensor products. If S 1 and S 2 are simple, and if S is the simple object with Drinfeld polynomials π i,S := π i,S 1 π i,S 2 (i ∈ I), then S is a subquotient of the tensor product S 1 ⊗ S 2 . 
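For later bookkeeping it may help to spell out how the parametrization by Drinfeld polynomials can be encoded. The toy sketch below is our own, restricted to spectral parameters that are integer powers of q (as in the category C_Z introduced below), so that an I-tuple of Drinfeld polynomials is just an I-tuple of multisets of exponents; the helper names are ours.

```python
from collections import Counter

# A polynomial prod_k (1 - q^{r_k} u) is stored as the Counter {r_k: multiplicity};
# a simple module is recorded by the I-tuple of such Counters.

def fundamental(i, r, I):
    """Drinfeld polynomials of the fundamental module V_{i, q^r}."""
    return {j: (Counter({r: 1}) if j == i else Counter()) for j in I}

def shift(pi, s):
    """Pull-back under tau_{q^s}:  pi_{i,S(a)}(u) = pi_{i,S}(a u)."""
    return {i: Counter({r + s: m for r, m in c.items()}) for i, c in pi.items()}

def tensor(pi1, pi2):
    """Data of the simple subquotient of S1 (x) S2:  pi_{i,S} = pi_{i,S1} pi_{i,S2}."""
    return {i: pi1[i] + pi2[i] for i in pi1}

I = [1, 2, 3]                                   # type A_3
print(shift(fundamental(1, 0, I), 2))           # V_{1, q^2}
print(tensor(fundamental(1, 0, I), fundamental(2, 3, I)))   # pi_1 = (1-u), pi_2 = (1-q^3 u)
```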
3.5 Let R denote the Grothendieck ring of C . It is known [FR,Cor. 2] that R is the polynomial ring over Z in the classes [V i,a ] (i ∈ I, a ∈ C * ) of the fundamental modules. 3.6 Since the Dynkin diagram of g is a bipartite graph, we have a partition I = I 0 ⊔ I 1 such that every edge connects a vertex of I 0 with a vertex of I 1 . The following notation will be very convenient and used in many places. For i ∈ I we set ξ i = 0 if i ∈ I 0 , 1 if i ∈ I 1 , ε i = (−1) ξ i .(4) Clearly, the map i → ξ i is completely determined by the choice of ξ i 0 ∈ {0, 1} for a single vertex i 0 , hence there are only two possible such maps. 3.7 Let C Z be the full subcategory of C whose objects V satisfy: for every composition factor S of V and every i ∈ I, the roots of the Drinfeld polynomial π i,S (u) belong to q 2Z+ξ i . The Grothendieck ring R Z of C Z is the subring of R generated by the classes [V i,q 2k+ξ i ] (i ∈ I, k ∈ Z). It is known that every simple object S of C can be written as a tensor product S 1 (a 1 ) ⊗ · · · ⊗ S k (a k ) for some simple objects S 1 , . . . , S k of C Z and some complex numbers a 1 , . . . , a k such that a i a j ∈ q 2Z , (1 i < j k). (This follows, for example, from the fact that such a tensor product satisfies the irreducibility criterion in [Cha2].) Therefore, the description of the simple objects of C essentially reduces to the description of the simple objects of C Z . 3.9 The definition of C ℓ depends on the map i → ξ i which can be chosen in two different ways. However, if g is not of type A 2n , the image of C ℓ under the auto-equivalence V → V σ (q h+2ℓ+1 ) is the category defined like C ℓ but with the opposite choice of ξ i . If g is of type A 2n , the image of C ℓ under V → V * (q h ) is the category defined like C ℓ but with the opposite choice of ξ i . This shows that the choice of ξ i is in fact irrelevant. 3.10 For i ∈ I, k ∈ N * and a ∈ C * , the simple object W (i) k,a with Drinfeld polynomials π j,W (i) k,a = (1 − au)(1 − aq 2 u) · · · (1 − aq 2k−2 u) if j = i, 1 if j = i, is called a Kirillov-Reshetikhin module. In particular for k = 1, W 1,a coincides with the fundamental module V i,a . By convention, W (i) 0,a is the trivial representation for every i and a. The classes [W (i) k,a ] in R satisfy the following system of equations indexed by i ∈ I, k ∈ N * , and a ∈ C * , called the T -system: [W (i) k,a ][W (i) k,aq 2 ] = [W (i) k+1,a ][W (i) k−1,aq 2 ] + ∏ j [W ( j) k,aq ],(5) where in the right-hand side, the product runs over all vertices j adjacent to i in the Dynkin diagram. This was conjectured in [KNS] and proved in [N2, H3]. 0 · · · 0 1 [W 1,aq 2k−2 ] .(6) 4 The case ℓ = 1: statement of results and conjectures We now focus on the subcategory C 1 . 4.1 Let Q = ZΠ be the root lattice of g. Let Φ ⊂ Q be the root system. Following [FZ2] we denote by Φ −1 the subset of almost positive roots, that is, Φ −1 = Φ >0 ∪ (−Π) consists of the positive roots together with the negatives of the simple roots. In this section, we attach to each β ∈ Φ −1 a simple object S(β ) of C 1 . 4.1.1 To the negative simple root −α i we attach the module S(−α i ) whose Drinfeld polynomials are all equal to 1, except P i,S(−α i ) which is equal to P i,S(−α i ) (u) = 1 − uq 2 if i ∈ I 0 , 1 − uq if i ∈ I 1 .(7) In other words, S(−α i ) is equal to the fundamental module V i,q 2 if i ∈ I 0 , and to the fundamental module V i,q if i ∈ I 1 . 
4.1.2 To the simple root α i we attach the module S(α i ) whose Drinfeld polynomials are all equal to 1, except P i,S(α i ) which is equal to P i,S(α i ) (u) = 1 − u if i ∈ I 0 , 1 − uq 3 if i ∈ I 1 .(8) In other words, S(α i ) is equal to the fundamental module V i,1 if i ∈ I 0 , and to the fundamental module V i,q 3 if i ∈ I 1 . To β = ∑ i∈I b i α i ∈ Φ >0 we attach the module S(γ) whose Drinfeld polynomials are P i,S(β ) = P i,S(α i ) b i , (i ∈ I).(9) If β is not a simple root, this is not a Kirillov-Reshetikhin module. 4.1.4 For i ∈ I, we denote by F i the simple module whose Drinfeld polynomials are all equal to 1, except P i,F i which is equal to P i,F i (u) = (1 − uq 0 )(1 − uq 2 ) if i ∈ I 0 , (1 − uq)(1 − uq 3 ) if i ∈ I 1 .(10) This is a Kirillov-Reshetikhin module. The classes [F i ] will play the rôle of frozen cluster variables in the Grothendieck ring R 1 . 4.2 We define a quiver Γ with 2n vertices as follows. We start from a copy of the Dynkin diagram oriented in such a way that every vertex of I 0 is a source, and every vertex of I 1 is a sink. For every i ∈ I we then add a new vertex i ′ and an arrow i ← i ′ if i ∈ I 0 (resp. i → i ′ if i ∈ I 1 ). Example 4.1 Let g be of type A 3 . We choose I 0 = {1, 3}. The quiver Γ is 1 ← 1 ′ ↓ 2 → 2 ′ ↑ 3 ← 3 ′ Put I ′ = {i ′ | i ∈ I}. It is often convenient to identify I with [1, n], I ′ with [n + 1, 2n], and i ′ with i + n. This will be done freely in the sequel. As explained in 2.1.1 we can attach a matrix B = (b i j ) to Γ. This is a 2n × n-matrix with set of column indices I, and set of row indices I ∪ I ′ , the vertex set of Γ. The entry b i j is equal to 1 if there is an arrow from j to i in Γ, to -1 if there is an arrow from i to j in Γ, and to 0 otherwise. Example 4.2 We continue the previous example. We have B =         0 −1 0 1 0 1 0 −1 0 −1 0 0 0 1 0 0 0 −1         . Let A = A ( B) be the cluster algebra attached as in 2.1.1 to the initial seed (x, B), where x = (x 1 , . . . , x 2n ). This is a cluster algebra of rank n with n frozen variables. By construction, the principal part of B is a skew-symmetric matrix encoded by a Dynkin quiver (with sink-source orientation) of the same Dynkin type as g. As recalled in 2. Φ −1 = {−α 1 , −α 2 , −α 3 , α 1 , α 2 , α 3 , α 1 + α 2 , α 2 + α 3 , α 1 + α 2 + α 3 }. The cluster algebra A is the subring of F = Q(x 1 , x 2 , x 3 , f 1 , f 2 , f 3 ) generated by x[−α 1 ] = x 1 , x[−α 2 ] = x 2 , x[−α 3 ] = x 3 , f 1 , f 2 , f 3 , and the following cluster variables x[α 1 ] = x 2 + f 1 x 1 , x[α 2 ] = x 1 x 3 + f 2 x 2 , x[α 3 ] = x 2 + f 3 x 3 , x[α 1 + α 2 ] = f 2 x 2 + f 1 x 1 x 3 + f 1 f 2 x 1 x 2 , x[α 2 + α 3 ] = f 2 x 2 + f 3 x 1 x 3 + f 2 f 3 x 2 x 3 , x[α 1 + α 2 + α 3 ] = f 2 x 2 2 + f 1 f 3 x 1 x 3 + f 1 f 2 x 2 + f 2 f 3 x 2 + f 1 f 2 f 3 x 1 x 2 x 3 . Lemma 4.4 The cluster algebra A is equal to the polynomial ring in the 2n variables x[−α i ], x[α i ], (i ∈ I). Proof -This follows from [BFZ]. Indeed, A is an acyclic cluster algebra, and x[−α i ], x[α i ] are the generators denoted by x i , x ′ i in [BFZ]. The monomials in x 1 , x ′ 1 , . . . , x n , x ′ n which contain no product of the form x j x ′ j are called standard, and by [BFZ,Cor. 1.21] they form a basis of A over the ring Z[ f i | i ∈ I]. It then follows from the relations f i = x i x ′ i − ∏ j =i x −a i j j that the set of all monomials in x 1 , x ′ 1 , . . . , x n , x ′ n is a basis of A over Z. 2 Example 4. 5 We continue Example 4.3. 
The generators of A can be expressed as polynomials in x[−α 1 ], x[−α 2 ], x[−α 3 ], x[α 1 ], x[α 2 ], x[α 3 ], as follows: f 1 = x[−α 1 ]x[α 1 ] − x[−α 2 ], f 2 = x[−α 2 ]x[α 2 ] − x[−α 1 ]x[−α 3 ], f 3 = x[−α 3 ]x[α 3 ] − x[−α 2 ], x[α 1 + α 2 ] = x[α 1 ]x[α 2 ] − x[−α 3 ], x[α 2 + α 3 ] = x[α 2 ]x[α 3 ] − x[−α 1 ], x[α 1 + α 2 + α 3 ] = x[α 1 ]x[α 2 ]x[α 3 ] − x[−α 1 ]x[α 1 ] − x[−α 3 ]x[α 3 ] + x[−α 2 ]. 4.4 We have seen in 3.8 that the Grothendieck ring R 1 is the polynomial ring in the classes of the 2n fundamental modules of C 1 : S(−α i ), S(α i ), (i ∈ I). By Lemma 4.4, the assignment x[−α i ] → [S(−α i )], x[α i ] → [S(α i )], (i ∈ I), extends to a ring isomorphism ι from A to R 1 . We can now state the main theorem-conjecture of this section. ι(x[β ]) = [S(β )], ι( f i ) = [F i ], (β ∈ Φ −1 , i ∈ I). (ii) If we identify A to R 1 via ι, C 1 becomes a monoidal categorification of A . Moreover, the class in R 1 of any simple object of C 1 is a cluster monomial (in other words, every simple object of C 1 is real). The conjecture will be proved in Section 10 for g of type A n . In Section 11 we prove it for g of type D 4 , and we prove (i) for g of type D n . Conjecture 4.6 (ii) immediately implies the following Corollary 4.7 (i) The category C 1 has finitely many prime simple objects, namely, the cluster simple objects S(β ) (β ∈ Φ −1 ), and the frozen simple objects F i (i ∈ I). (ii) Each simple object of C 1 is a tensor product of primes whose classes belong to one of the clusters of A . (iii) All the tensor powers of a simple object of C 1 are simple. (iv) The cluster monomials form a Z-basis of A . 2 Example 4. 8 We continue the previous examples. By Corollary 4.7, for g of type A 3 the category C 1 has 12 prime simple objects: S(−α 1 ), S(−α 2 ), S(−α 3 ), S(α 1 ), S(α 2 ), S(α 3 ), S(α 1 + α 2 ), S(α 2 + α 3 ), S(α 1 + α 2 + α 3 ), F 1 , F 2 , F 3 . The first 6 modules are fundamental representations, F 1 , F 2 , F 3 are Kirillov-Reshetikhin modules, S(α 1 + α 2 ), S(α 2 + α 3 ) are minimal affinizations (in the sense of [Cha1]), but S(α 1 + α 2 + α 3 ) is not a minimal affinization. Its underlying U q (g)-module is isomorphic to V (ϖ 1 + ϖ 2 + ϖ 3 ) ⊕ V (ϖ 2 ) . (Here we denote by ϖ i (i ∈ I) the fundamental weights of g, and by V (λ ) the irreducible U q (g)-module with highest weight λ .) Using the known dimensions of the fundamental modules and the formulas of Example 4.5, we can easily calculate their dimensions, namely (in the same order): 4, 6, 4, 4, 6, 4, 20, 20, 70, 10, 20, 10. The cluster algebra A has 14 clusters: {−α 1 , −α 2 , −α 3 }, {α 1 , −α 2 , −α 3 }, {−α 1 , α 2 , −α 3 }, {−α 1 , −α 2 , α 3 }, {α 1 , −α 2 , α 3 }, {−α 1 , α 2 , α 2 + α 3 }, {−α 1 , α 3 , α 2 + α 3 }, {−α 3 , α 2 , α 1 + α 2 }, {−α 3 , α 1 , α 1 + α 2 }, {α 1 + α 2 , α 2 , α 2 + α 3 }, {α 1 , α 3 , α 1 + α 2 + α 3 }, {α 1 , α 1 + α 2 , α 1 + α 2 + α 3 }, {α 3 , α 2 + α 3 , α 1 + α 2 + α 3 }, {α 1 + α 2 , α 2 + α 3 , α 1 + α 2 + α 3 }. Here we have written for short β instead of x[β ] and we have omitted the three frozen variables which belong to every cluster. The simple objects of C 1 are exactly all tensor products of the form S(β 1 ) ⊗k 1 ⊗ S(β 2 ) ⊗k 2 ⊗ S(β 3 ) ⊗k 3 ⊗ F ⊗l 1 1 ⊗ F ⊗l 2 2 ⊗ F ⊗l 3 3 , (k 1 , k 2 , k 3 , l 1 , l 2 , l 3 ) ∈ N 6 , in which {β 1 , β 2 , β 3 } runs over the 14 clusters listed above. Moreover, simple objects multiply (in the Grothendieck ring) as the corresponding cluster monomials in A . 
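Since the dimension of the underlying vector space is additive on exact sequences and multiplicative on tensor products, it induces a ring homomorphism from R_1 to Z, so the dimensions listed in Example 4.8 can be recovered from the relations of Example 4.5. The short check below is ours; the only input is the standard fact, quoted in Example 4.8, that the fundamental modules in type A_3 have dimensions 4, 6 and 4.

```python
# Specialize the relations of Example 4.5 to dimensions.
d = {'-a1': 4, '-a2': 6, '-a3': 4, 'a1': 4, 'a2': 6, 'a3': 4}   # dim S(-alpha_i), dim S(alpha_i)

f1   = d['-a1'] * d['a1'] - d['-a2']                  # dim F_1
f2   = d['-a2'] * d['a2'] - d['-a1'] * d['-a3']       # dim F_2
f3   = d['-a3'] * d['a3'] - d['-a2']                  # dim F_3
a12  = d['a1'] * d['a2'] - d['-a3']                   # dim S(alpha_1 + alpha_2)
a23  = d['a2'] * d['a3'] - d['-a1']                   # dim S(alpha_2 + alpha_3)
a123 = (d['a1'] * d['a2'] * d['a3'] - d['-a1'] * d['a1']
        - d['-a3'] * d['a3'] + d['-a2'])              # dim S(alpha_1 + alpha_2 + alpha_3)

print(f1, f2, f3, a12, a23, a123)    # 10 20 10 20 20 70, as in Example 4.8
```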
For instance, the exchange formula x[−α 2 ]x[α 2 ] = f 2 + x[−α 1 ]x[−α 3 ] shows that the tensor product S(−α 2 ) ⊗ S(α 2 ) has two composition factors isomorphic to F 2 and S(−α 1 ) ⊗ S(−α 3 ). q-characters 5.1 An essential tool for our proof of Conjecture 4.6 in type A and D is the theory of q-characters of Frenkel and Reshetikhin [FR]. We shall recall briefly their definition and main properties. For more details see e.g. [CH]. 5.1.1 Recall from 3.2 that if V is a finite-dimensional U q ( g)-module, the endomorphisms of V which represent the generators h i,m commute with each other. Moreover, by Drinfeld's presentation the k i are pairwise commutative, and they commute with all the h i,m . Hence we can write V as a finite direct sum of common generalized eigenspaces for the simultaneous action of the k i and of the h i,m . These common generalized eigenspaces are called the l-weight-spaces of V . (Here l stands for "loop"). The q-character of V is a Laurent polynomial with positive integer coefficients in some indeterminates Y i,a (i ∈ I, a ∈ C * ), which encodes the decomposition of V as the direct sum of its l-weight-spaces . More precisely, the eigenvalues of the h i,m (m > 0) in an l-weight-space W of V are always of the form q m − q −m m(q − q −1 ) k i ∑ r=1 (a ir ) m − l i ∑ s=1 (b is ) m(11) for some nonzero complex numbers a ir , b is . Moreover, they completely determine the eigenvalues of the h i,m (m < 0) and of the k i on W . We encode this collection of eigenvalues with the Laurent monomial ∏ i∈I k i ∏ r=1 Y i,a ir l i ∏ s=1 Y −1 i,b is ,(12) and the coefficient of this monomial in the q-character of V is the dimension of W [FM,Prop. 2.4]. The collection of eigenvalues (11) is called the l-weight of W . By a slight abuse, we shall often say that the l-weight of W is the monomial (12). Let Y = Z[Y ± i,a ; i ∈ I, a ∈ C * ], and denote by χ q (V ) ∈ Y the q-character of V ∈ C . One shows that χ q (V ) depends only on the class of V in the Grothendieck ring R, and that the induced map χ q : R → Y is an injective ring homomorphism. Example 5.1 Let g = sl 2 . Then I = {1}, and we can drop the index i ∈ I. Hence Y = Z[Y ± a ; a ∈ C * ]. One calculates easily the q-character of the fundamental module W 1,a . It is two-dimensional, and decomposes as a sum of two common eigenspaces: χ q (W 1,a ) = Y a +Y −1 aq 2 .(13)χ q (W 2,a ) = Y a Y aq 2 +Y a Y −1 aq 4 +Y −1 aq 2 Y −1 aq 4 . One can calculate similarly the q-character of every Kirillov-Reshetikhin module for U q ( sl 2 ) (see below Example 5.2). 5.1.2 U q ( g) has a natural subalgebra isomorphic to U q (g), hence every V ∈ C can be regarded as a U q (g)-module by restriction. The l-weight-space decomposition of V is a refinement of its decomposition as a direct sum of U q (g)-weight-spaces. Let P be the weight lattice of g, with basis given by the fundamental weights ϖ i (i ∈ I). We denote by ω the Z-linear map from Y to Z[P] defined by ω ∏ i,a Y u i,a i,a = ∑ i ∑ a u i,a ϖ i .(14) If W is an l-weight-space of V with l-weight the Laurent monomial m ∈ Y , then W is a subspace of the U q (g)-weight-space with weight ω(m). Hence, the image ω(χ q (V )) of the q-character of V is the ordinary character of the underlying U q (g)-module. For i ∈ I and a ∈ C * define A i,a = Y i,aq Y i,aq −1 ∏ j =i Y a i j j,a .(15) (Note that, because of our general assumption that g is simply-laced (see §3.1), for i = j we have a i j ∈ {0, −1}.) 
Thus ω(A i,a ) = α i , and the A i,a (a ∈ C * ) should be viewed as affine analogues of the simple root α i . Following [FR], we define a partial order on the set M of Laurent monomials in the Y i,a by setting: m m ′ ⇐⇒ m ′ m is a monomial in the A i,a . This is an affine analogue of the usual partial order on P, defined by λ λ ′ if and only if λ ′ − λ is a sum of simple roots α i . Let S be a simple object of C with Drinfeld polynomials π i,S (u) = n i ∏ k=1 (1 − ua (i) k ), (i ∈ I).(16) Then the subset M (S) of M consisting of all the monomials occuring in χ q (S) has a unique maximal element with respect to , which is equal to m S = ∏ i∈I n i ∏ k=1 Y i,a (i) k(17) and has coefficient 1 [FM,Th. 4.1]. This is the highest weight monomial of χ q (S). The onedimensional l-weight-space of S with l-weight m S consists of the highest-weight vectors of S, that is, the l-weight vectors v ∈ S such that x + i,r v = 0 for every i ∈ I and r ∈ Z. A monomial m ∈ M is called dominant if it does not contain negative powers of the variables Y i,a . The highest weight monomial m S of an irreducible q-character χ q (S) is always dominant. We will denote by M + the set of dominant monomials. Conversely, we can associate to any dominant monomial as in (17) a unique set of Drinfeld polynomials given by (16). Hence, we can equivalently parametrize the isoclasses of simple objects of C by M + . In the sequel, the simple module whose q-character has highest weight monomial m ∈ M + will be denoted by L(m). To summarize, we have χ q (L(m)) = m 1 + ∑ p M p , where all the M p are monomials in the variables A −1 i,a . It will sometimes be convenient to renormalize the q-characters and work with χ q (L(m)) := 1 m χ q (L(m)) = 1 + ∑ p M p .(18) This is a polynomial with positive integer coefficients in the variables A −1 i,a . Example 5.2 We continue Example 5.1 and describe the q-characters of all the simple objects of C for g = sl 2 . For a ∈ C * , we have A a = Y aq Y aq −1 . For k ∈ N, put m k,a = Y a Y aq 2 · · ·Y aq 2k−2 . It follows from (6) and (13) that for the Kirillov-Reshetikhin modules we have [FR]: χ q (W k,a ) = m k,a 1 + A −1 aq 2k−1 1 + A −1 aq 2(k−1)−1 1 + · · · 1 + A −1 aq 3 (1 + A −1 aq ) · · · .(19) We call q-segment of origin a and length k the string of complex numbers Σ(k, a) = {a, aq 2 , · · · , aq 2k−2 }. Two q-segments are said to be in special position if one does not contain the other, and their union is a q-segment. Otherwise we say that there are in general position. It is easy to check that every finite multi-set {b 1 , . . . , b s } of elements of C * can be written uniquely as a union of segments Σ(k i , a i ) in such a way that every pair (Σ(k i , a i ), Σ(k j , a j )) is in general position. Then, Chari and Pressley [CP2] have proved that the simple module S with Drinfeld polynomial π S (u) = s ∏ m=1 (1 − ub m ) is isomorphic to the tensor product of Kirillov-Reshetikhin modules i W k i ,a i . Hence χ q (S) can be calculated using (19). An important consequence of the existence and uniqueness of the highest l-weight of a simple module is: Proposition 5.3 [FM] Let V and W be two objects of C . If χ q (V ) and χ q (W ) have the same dominant monomials with the same multiplicities, then χ q (V ) = χ q (W ). Indeed, to express χ q (V ) as a sum of irreducible q-characters, one can use the following simple procedure. Pick a dominant monomial m in χ q (V ) which is maximal with respect to the partial order . Then χ q (V ) − χ q (L(m)) is a polynomial with nonnegative coefficients. 
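For g = sl_2 all of this can be checked by a direct computation. The sketch below is our own (it is not the Frenkel-Mukhin algorithm of the next subsection): it builds the q-characters of Example 5.2 from their explicit monomials (equivalently, the expansion of (19)), verifies the T-system (5), whose product over adjacent vertices is empty for sl_2, and then runs the decomposition procedure described around Proposition 5.3 on the product χ_q(W_{1,q^0}) χ_q(W_{1,q^2}), recovering [W_{2,q^0}] + [trivial]. The monomial encoding, the helper names, and the use of Python's max as a stand-in for choosing a ≤-maximal dominant monomial are all our conventions; irr only handles highest weights given by a single q-segment, which is all this example needs.

```python
from collections import Counter
from itertools import product

# Laurent monomials in the variables Y_r := Y_{q^r} (sl_2 case) are stored as
# sorted tuples of (r, exponent); a q-character is a Counter {monomial: multiplicity}.

def mono(pairs):
    m = Counter()
    for r, e in pairs:
        m[r] += e
    return tuple(sorted((r, e) for r, e in m.items() if e != 0))

def mul_char(c1, c2):
    out = Counter()
    for (m1, k1), (m2, k2) in product(c1.items(), c2.items()):
        out[mono(list(m1) + list(m2))] += k1 * k2
    return out

def kr(k, r):
    """q-character of the sl_2 Kirillov-Reshetikhin module W_{k,q^r} (Example 5.2)."""
    out = Counter()
    for j in range(k + 1):
        pairs = ([(r + 2 * s, 1) for s in range(k - j)]
                 + [(r + 2 * s + 2, -1) for s in range(k - j, k)])
        out[mono(pairs)] += 1
    return out

# T-system (5) for sl_2:  [W_{k,a}][W_{k,aq^2}] = [W_{k+1,a}][W_{k-1,aq^2}] + 1.
for k in range(1, 6):
    lhs = mul_char(kr(k, 0), kr(k, 2))
    rhs = mul_char(kr(k + 1, 0), kr(k - 1, 2)) + Counter({mono([]): 1})
    assert lhs == rhs

# Greedy decomposition of chi_q(W_{1,q^0}) chi_q(W_{1,q^2}) into irreducible q-characters.
def dominant(m):
    return all(e >= 0 for _, e in m)

def irr(m):
    """chi_q(L(m)) for a dominant sl_2 monomial m that is a single q-segment."""
    rs = sorted(r for r, e in m for _ in range(e))
    assert all(rs[i + 1] - rs[i] == 2 for i in range(len(rs) - 1)), "not a single segment"
    return kr(len(rs), rs[0]) if rs else Counter({mono([]): 1})

remainder = mul_char(kr(1, 0), kr(1, 2))
pieces = []
while +remainder:
    m = max(m for m, c in remainder.items() if c > 0 and dominant(m))
    pieces.append(m)
    remainder = remainder - irr(m)           # exact in this example
assert pieces == [((0, 1), (2, 1)), ()]      # [W_{2,q^0}] + [trivial]
print(pieces)
```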
If χ q (V ) − χ q (L(m)) = 0, then pick a maximal dominant monomial m ′ in χ q (V ) − χ q (L(m)) and consider χ q (V ) − χ q (L(m)) − χ q (L(m ′ )). And so forth. After a finite number of steps one gets the decomposition of χ q (V ) into irreducible characters using only its subset of dominant monomials. 5.2 We now recall an algorithm due to Frenkel and Mukhin [FM] which attaches to any m ∈ M + a polynomial FM(m), equal to the q-character of L(m) under certain conditions. 5.2.1 We first introduce some notation. Given i ∈ I, we say that m ∈ M is i-dominant if every variable Y i,a (a ∈ C * ) occurs in m with non-negative exponent, and in this case we write m ∈ M i,+ . Using the known irreducible q-characters of U q ( sl 2 ) (see Example 5.2) we define for m ∈ M i,+ a polynomial ϕ i (m) as follows. Let m be the monomial obtained from m by replacing Y j,a by Y a if j = i and by 1 if j = i. Then there is a unique irreducible q-character χ q (m) of U q ( sl 2 ) with highest weight monomial m. Write χ q (m) = m(1 + ∑ p M p ), where the M p are monomials in the variables A −1 a (a ∈ C * ). Then one sets ϕ i (m) := m(1 + ∑ p M p ) where each M p is obtained from the corresponding M p by replacing each variable A −1 a by A −1 i,a . Suppose now that m ∈ M + . We define the subset D m of M as follows. A monomial m ′ belongs to D m if there is a finite sequence (m 0 = m, m 1 , . . . , m t = m ′ ) such that for all r = 1, . . . ,t there exists i ∈ I such that m r−1 ∈ M i,+ and m r is a monomial occuring in ϕ i (m r−1 ). Clearly, D m is countable and every m ′ ∈ D m satisfies m ′ m. We can therefore write D m = {m 0 = m, m 1 , m 2 , . . . } in such a way that if m t m r then t r. Finally, we define inductively some sequences of integers (s(m r )) r≥0 and (s i (m r )) r≥0 (i ∈ I) as follows. The initial condition is s(m 0 ) = 1 and s i (m 0 ) = 0 for all i ∈ I. For t ≥ 1, we set s i (m t ) = ∑ r<t, m r ∈M i,+ (s(m r ) − s i (m r ))[ϕ i (m r ) : m t ], (i ∈ I),(20)s(m t ) = max{s i (m t ) | i ∈ I},(21) where [ϕ i (m r ) : m t ] denotes the coefficient of m t in the polynomial ϕ i (m r ). We then put FM(m) := ∑ r≥0 s(m r )m r .(22) Example 5.4 Take g of type A 2 and m = Y 1,1 Y 2,q 3 . We have ϕ 1 (m) = Y 1,1 Y 2,q 3 +Y −1 1,q 2 Y 2,q Y 2,q 3 , ϕ 2 (m) = Y 1,1 Y 2,q 3 +Y 1,1 Y 1,q 4Y −1 2,q 5 , so we put m 1 = Y −1 1,q 2 Y 2,q Y 2,q 3 and m 2 = Y 1,1 Y 1,q 4 Y −1 2,q 5 . The monomial m 1 is 2-dominant, and we have ϕ 2 (m 1 ) = Y −1 1,q 2 Y 2,q Y 2,q 3 +Y −1 1,q 2 Y 1,q 4 Y 2,q Y −1 2,q 5 +Y 1,q 4Y −1 2,q 3 Y −1 2,q 5 ; similarly, m 2 is 1-dominant and we have ϕ 1 (m 2 ) = Y 1,1 Y 1,q 4Y −1 2,q 5 +Y −1 1,q 2 Y 1,q 4Y 2,q Y −1 2,q 5 +Y 1,1 Y −1 1,q 6 +Y −1 1,q 2 Y −1 1,q 6 Y 2,q . We set m 3 = Y −1 1,q 2 Y 1,q 4 Y 2,q Y −1 2,q 5 , m 4 = Y 1,q 4 Y −1 2,q 3 Y −1 2,q 5 , m 5 = Y 1,1 Y −1 1,q 6 , and m 6 = Y −1 1,q 2 Y −1 1,q 6 Y 2,q . We see that m 3 is neither 1-dominant nor 2-dominant. The monomial m 4 is 1-dominant and ϕ 1 (m 4 ) = Y 1,q 4 Y −1 2,q 3 Y −1 2,q 5 +Y −1 1,q 6 Y −1 2,q 3 ; similarly, m 5 and m 6 are 2-dominant and ϕ 2 (m 5 ) = Y 1,1 Y −1 1,q 6 , ϕ 2 (m 6 ) = Y −1 1,q 2 Y −1 1,q 6 Y 2,q +Y −1 1,q 6 Y −1 2,q 3 . Finally the monomial m 7 = Y −1 1,q 6 Y −1 2,q 3 is neither 1-dominant nor 2-dominant. So D m = {m, m 1 , m 2 , m 3 , m 4 , m 5 , m 6 , m 7 }. It is easy to check that FM(m) is the sum of all the elements of D m with coefficient 1, and that we have FM(m) = ϕ 1 (m) + ϕ 1 (m 2 ) + ϕ 1 (m 4 ) = ϕ 2 (m) + ϕ 2 (m 1 ) + ϕ 2 (m 5 ) + ϕ 2 (m 6 ). 5.2.2 Let m ∈ M + . 
We say that the simple module L(m) is minuscule if m is the only dominant monomial of χ q (L(m)). (In [N3] these modules are called special.) Theorem 5.5 [FM] If L(m) is minuscule then χ q (L(m)) = FM(m). Moreover, all the fundamental modules are minuscule. It was proved in [N2] that Kirillov-Reshetikhin modules are also minuscule. Unfortunately, there exist simple modules for which the Frenkel-Mukhin algorithm fails, as shown by the next example. For another example in type C 3 see [NN]. Example 5.6 Take g of type A 2 and m = Y 2 1,1 Y 2,q 3 . Clearly, L(m) is a simple object of C 1 , and using Conjecture 4.6 (ii) (which will be proved in Section 10 in type A n ), we have in the notation of 4.1 L(m) ∼ = S(α 1 ) ⊗ S(α 1 + α 2 ) = L(Y 1,1 ) ⊗ L(Y 1,1 Y 2,q 3 ). It is easy to calculate that for g of type A n+1 and a ∈ C * , we have χ q (L(Y 1,a )) = Y 1,a +Y −1 1,aq 2 Y 2,aq +Y −1 2,aq 3 Y 3,aq 2 + · · · +Y −1 n,aq n+1 . Hence, for our example in type A 2 , χ q (L(m)) = χ q (L(Y 1,1 ))χ q (L(Y 1,1 Y 2,q 3 )) contains the monomial Y −1 2,q 3 Y 1,1 Y 2,q 3 = Y 1,1 = mA −1 1,q A −1 2,q 2 with coefficient 1. On the other hand, ϕ 1 (m) = m(1 + 2A −1 1,q + A −2 1,q ) and ϕ 2 (m) = m(1 + A −1 2,q 4 ). Now m 1 := mA −1 1,q = Y 1,1 Y −1 1,q 2 Y 2,q Y 2,q 3 is 2-dominant and ϕ 2 (m 1 ) = m 1 (1 + A −1 2,q 4 + A −1 2,q 2 A −1 2,q 4 ). Thus FM(m) − m − 2m 1 is of the form mP where P ∈ Z[A −1 1,a , A −1 2,a ; a ∈ C * ] contains only monomials divisible by A −2 1,q or A −1 2,q 4 . Hence FM(m) does not contain the monomial mA −1 1,q A −1 2,q 2 , thus FM(m) is not equal to χ q (L(m)). Remark 5.7 (i) When FM(m) = χ q (L(m)), this polynomial can be written for every i ∈ I as a positive sum of polynomials of the form ϕ i (m ′ ) for some m ′ ∈ M i,+ . For m ′ ∈ D m ∩ M i,+ , the integer s(m ′ ) − s i (m ′ ) is then the coefficient of ϕ i (m ′ ) in this sum. (ii) One can slightly generalize [FM,Th. 5.9] as follows: a sufficient condition for having FM(m) = χ q (L(m)) is that FM(m) contains all the dominant monomials of χ q (L(m)). The proof is essentially the same as in [FM,Th. 5.9]. (iii) In [H1], a polynomial F(m) defined by formulas similar to (20), (21), (22), has been defined for any m ∈ M + . The only difference between the definitions of F(m) and FM(m) is the formula giving s(m t ). If L(m) is minuscule, then F(m) = FM(m) = χ q (L(m)). Otherwise F(m) may not be equal to FM(m). The polynomial FM(m) has nonnegative coefficients, may contain several dominant monomials, and it may not belong to the image of the map χ q : R → Y . On the other hand, F(m) belongs to this image, has a unique dominant monomial (equal to m), but may have negative coefficients. This is well illustrated by Example 5.6. In that case L(m) has dimension 24, and we have FM(m) = χ q (L(m)) −Y 1,1 , F(m) = χ q (L(m)) −Y 1,1 −Y −1 1,q 2 Y 2,q −Y −1 2,q 3 , and F(m) contains the monomial Y −1 2,q 3 with coefficient −1. In this paper when we refer to the Frenkel-Mukhin algorithm we will always mean the polynomial FM(m). 5.2.3 Let us note a useful consequence of the Frenkel-Mukhin algorithm. Proposition 5.8 Let m = ∏ r k=1 Y i k ,a k ∈ M + and let mM be a monomial occuring in χ q (L(m)), where M is a monomial in the variables A −1 i,a (i ∈ I, a ∈ C * ). If A −1 j,b occurs in M there exist k ∈ {1, . . . , r} and l ∈ N \ {0} such that b = a k q l . Moreover, there also exist finite sequences ( j 1 = i k , j 2 , . . . 
, j s = j) ∈ I s , (l 1 = 1 l 2 · · · l s = l) ∈ N s such that a j t , j t+1 = −1 and A −1 j t ,a k q l t occurs in M for every t = 1, . . . , s − 1. Finally, l t is odd if ε j t = ε i k and even otherwise. Proof -If m = Y i,a is the highest weight monomial of a fundamental module, then χ q (L(m)) = FM(m), and the proposition holds by definition of the algorithm FM. In the general case, by 3.4 L(m) is a subquotient of the tensor product ⊗ r k=1 L(Y i k ,a k ), and therefore its q-character is contained in the product of the q-characters of the factors, which proves the proposition. 2 5.2.4 We can now prove Proposition 3.2. Let L(m) and L(m ′ ) be in C ℓ . This means that m and m ′ are monomials in the variables Y i,q ξ i +2k (0 k ℓ). If L(m ′′ ) is a composition factor of L(m) ⊗ L(m ′ ) then m ′′ is a product of monomials of χ q (L(m)) and χ q (L(m ′ )). So we have m ′′ = mm ′ M where M is a monomial in the A −1 i,a . We claim that, for m ′′ to be dominant, the spectral parameters a have to be of the form a = q ξ i +2k+1 with 0 k ℓ − 1. Indeed, by Proposition 5.8 we know that these parameters belong to q ξ i +2N+1 . Let s = max{r | A −1 i,q ξ i +2r+1 occurs in M for some i}. If s ℓ then all variables Y i,q ξ i +2s+2 occuring in m ′′ have a negative exponent, and m ′′ cannot be dominant. Hence a = q ξ i +2k+1 with 0 k ℓ − 1. It follows that m ′′ depends only on the variables Y i,q ξ i +2k (0 k ℓ). Thus L(m ′′ ) is in C ℓ and C ℓ is closed under tensor products. The description of the Grothendieck ring R ℓ immediately follows, and this finishes the proof of Proposition 3.2. 5.2.5 As a consequence of Proposition 5.8, all simple modules in the category C 0 are minuscule. Indeed consider such a module S with highest monomial m. A monomial m ′ occuring in χ q (S) − m contains at least one A −1 j,q r with r ≥ 1 and j ∈ I. As m ∈ Z[Y i,q ξ i ] i∈I , we can conclude as in Section 5.2.4 that m ′ is not dominant. This property also holds for other subcategories equivalent to C 0 obtained by shifting the spectral parameter by a ∈ C * . More explicitly, consider the category of finite-dimensional representations V which satisfy : for every composition factor S of V and every i ∈ I, the Drinfeld polynomial π i,S (u) belongs to Z[(1 − a −1 q −ξ i u)]. Then one proves exactly as above that in this category any simple object is minuscule, and every tensor product of simple objects is simple. 5.3 The next proposition is often helpful to prove that certain monomials belong to the qcharacter of a module. Proposition 5.9 [H2,Prop. 3.1] Let V be an object of C and fix i ∈ I. Then there is a unique decomposition of χ q (V ) as a finite sum χ q (V ) = ∑ m∈M i,+ λ m ϕ i (m), and the λ m are nonnegative integers. Suppose that we know that m ∈ M i,+ occurs in χ q (V ). Assume also that if m ′ ∈ M i,+ \ {m} is such that ϕ i (m ′ ) contains m then m ′ does not occur in χ q (V ). Then the proposition implies that all the monomials of ϕ i (m) occur in χ q (V ). In particular, let m ∈ M + and let mM be a monomial of χ q (L(m)), where M is a monomial in the A −1 j,a ( j ∈ I). If M contains no variable A −1 i,a then mM ∈ M i,+ and ϕ i (mM) is contained in χ q (L(m)). 5.4 We will sometimes need a natural generalization of 5.3 in which the singleton {i} is replaced by an arbitrary subset J of I. To formulate it we first need to imitate the definition of ϕ i (m) and introduce some polynomials ϕ J (m). We say that m ∈ M is J-dominant, and we write m ∈ M J,+ , if m does not contain negative powers of Y j,a ( j ∈ J, a ∈ C * ). 
For m ∈ M we denote by m the monomial obtained from m by replacing Y ± i,a by 1 if i ∈ J. If m ∈ M J,+ then m can be regarded as a dominant monomial for the subalgebra U q ( g J ) of U q ( g) generated by the Drinfeld generators attached to the vertices of J. Denote by χ q (m) the q-character of the unique irreducible q-character of U q ( g J ) with highest weight monomial m. Write χ q (m) = m(1 + ∑ p M p ), where the M p are monomials in the variables A −1 j,a ( j ∈ J, a ∈ C * ). Then one sets ϕ J (m) := m(1+ ∑ p M p ) where each M p is obtained from the corresponding M p by replacing each variable A −1 j,a by A −1 j,a . In particular, if m ∈ M + then ϕ J (m) is the sum of all the monomials of χ q (L(m)) of the form mM where M is a monomial in the A −1 j,a ( j ∈ J). We can now state Proposition 5.10 [H2,Prop. 3.1] Let V be an object of C and let J be an arbitrary subset of I. Then there is a unique decomposition of χ q (V ) as a finite sum χ q (V ) = ∑ m∈M J,+ λ m ϕ J (m), and the λ m are nonnegative integers. In particular, let m ∈ M + and let mM be a monomial of χ q (L(m)), where M is a monomial in the A −1 i,a (i ∈ I). If M contains no variable A −1 j,a with j ∈ J then mM ∈ M J,+ and ϕ J (mM) is contained in χ q (L(m)). Truncated q-characters The Frenkel-Mukhin algorithm is an important tool because there is no general formula (like the Weyl character formula) for calculating an irreducible q-character of C . Unfortunately, even when the Frenkel-Mukhin algorithm is successful, the full expansion of the irreducible q-character is in general impossible to handle because it contains too many monomials. For example, the q-character of the 5th fundamental representation for g of type E 8 has 6899079264 monomials (counted with their multiplicities) [N4]. However, when dealing with the subcategory C 1 , we can work with certain truncations of the q-characters, as we shall explain in this section. Thus, we will see that in the category C 1 in type D 4 , the q-character χ q (L(Y 1,q 3Y 2 2,1 Y 3,q 3Y 4,q 3 )) which contains 167237 monomials (counted with their multiplicities) can be controlled by its truncated q-character which has only 14 monomials. 6.1 From now on we will work in the subcategory C Z . It follows from Proposition 5.8 that the q-characters of objects of C Z involve only monomials in the variables Y i,q r (i ∈ I, r ∈ Z). To simplify notation, we will henceforth write Y i,r instead of Y i,q r . Similarly, we will write A i,r instead of A i,q r . 6.2 Let V be an object of C 1 . We can write χ q (V ) = ∑ k m k (1 + ∑ p M (k) p ) where the m k are dominant monomials in the variables Y i,ξ i ,Y i,ξ i +2 (i ∈ I), and the M (k) p are certain monomials in the A −1 i,r . The factorization of a monomial m = m k M (k) p is in general not unique. For example, in type A 2 we have Y 1,0 Y 1,2 A −1 1,1 = Y 2,1 . However, if m k M (k) p is such that M (k) p contains a negative power of A i,r for some i ∈ I and some r 3, because of the restriction on the variables Y occuring in m k and of the formula A −1 i,r = Y −1 i,r+1 Y −1 i,r−1 ∏ j: a i j =−1 Y j,r , for any other expression m k M (k) p = m k M (k) p the monomial M (k) p also contains a negative power of A i,r for some i ∈ I and some r 3. We define the truncated q-character of V to be the Laurent polynomial obtained from χ q (V ) by keeping only the monomials M (k) p which do not contain any A −1 i,r with r 3. We denote this truncated polynomial by χ q (V ) 2 . 
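The effect of this truncation is easy to see on the sl_2 Kirillov-Reshetikhin characters of Example 5.2. The sketch below (our own, with sympy assumed) writes the normalized q-character of W_{k,q^0} in the variables A^{-1}_{q^r} following (19) and discards every monomial containing some A^{-1}_{q^r} with r ≥ 3: for k = 1 nothing is lost, while for k ≥ 2 only the highest weight monomial survives, consistent with the Kirillov-Reshetikhin case of Example 6.4 below.

```python
import sympy as sp

# Normalized q-character of the sl_2 Kirillov-Reshetikhin module W_{k,q^0},
# written in the variables a[r] = A^{-1}_{q^r}, following formula (19):
#   ~chi_q(W_{k,0}) = 1 + a[2k-1](1 + a[2k-3](... (1 + a[1]) ...)).
a = {r: sp.Symbol(f'A{r}inv') for r in range(1, 12, 2)}

def kr_norm(k):
    expr = sp.Integer(1)
    for r in range(1, 2 * k, 2):          # r = 1, 3, ..., 2k-1, innermost factor first
        expr = 1 + a[r] * expr
    return sp.expand(expr)

def truncate(expr):
    """Keep only the monomials not containing any A^{-1}_{q^r} with r >= 3."""
    keep = sp.Integer(0)
    for term in sp.Add.make_args(expr):
        if all(sym not in term.free_symbols for r, sym in a.items() if r >= 3):
            keep += term
    return keep

print(kr_norm(1), '->', truncate(kr_norm(1)))   # k = 1: nothing is dropped
print(kr_norm(2), '->', truncate(kr_norm(2)))   # k = 2: only the constant term survives
print(kr_norm(3), '->', truncate(kr_norm(3)))   # k = 3: idem
```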
Our motivation for introducing this truncation is the following Proposition 6.1 The map V → χ q (V ) 2 is an injective homomorphism from the Grothendieck ring R 1 of C 1 to Y . Proof -It is clear from the definition that our truncated q-character is additive and multiplicative, hence induces a homomorphism from R 1 to Y . Let us prove injectivity. Let S be a simple object of C 1 , and χ q (S) = m(1 + ∑ k M k ) where the M k are monomials in the A −1 i,r . If M k contains a variable A −1 i,r with r 3 then mM k can not be dominant. Indeed, if s = max{r | A −1 i,r occurs in M k for some i }, then all variables Y i,s+1 occuring in mM k have a negative exponent, and therefore mM k is not dominant. (In the terminology of [FM,§6.1], mM k is a right negative monomial.) Hence, the dominant monomials of χ q (S) are all contained in its truncated version χ q (S) 2 . Thus, if for two objects V and W of C 1 we have χ q (V ) 2 = χ q (W ) 2 then χ q (V ) and χ q (W ) have the same dominant monomials, and the claim follows from Proposition 5.3. 2 Remark 6.2 One might consider a different truncated q-character, obtained by keeping only the dominant monomials. By Proposition 5.3, this truncation is injective on R Z , but it is difficult to use because it is not multiplicative. Example 6.3 We continue Example 5.4 in type A 2 . With our new simplified notation, we have m = Y 1,0 Y 2,3 , and one checks easily that χ q (L(m)) 2 = m + m 1 = Y 1,0 Y 2,3 +Y −1 1,2 Y 2,1 Y 2,3 = Y 1,0 Y 2,3 1 + A −1 1,1 . On the other hand, it follows from Example 5.6 that χ q (L(Y 2 1,0 Y 2,3 )) 2 = χ q (L(Y 1,0 )) 2 χ q (L(Y 1,0 Y 2,3 )) 2 = Y 1,0 1 + A −1 1,1 + A −1 1,1 A −1 2,2 Y 1,0 Y 2,3 1 + A −1 1,1 = Y 2 1,0 Y 2,3 1 + 2A −1 1,1 + A −2 1,1 + A −1 1,1 A −1 2,2 + A −2 1,1 A −1 2,2 . Example 6.4 Let g be of type A, D, E. Then for i ∈ I χ q (L(Y i,2 )) 2 = Y i,2 , χ q (L(Y i,1 )) 2 = Y i,1 1 + A −1 i,2 , χ q (L(Y i,0 )) 2 = Y i,0 1 + A −1 i,1 ∏ j∈I,a i j =−1 (1 + A −1 j,2 ) , χ q (L(Y i,ξ i Y i,ξ i +2 )) 2 = Y i,ξ i Y i,ξ i +2 . Indeed, for all these modules the q-character is given by the Frenkel-Mukhin algorithm (the first three are fundamental modules, and the last one is a Kirillov-Reshetikhin module.) Let us introduce for P, Q ∈ Z[Y ± i,r ; i ∈ I, r ∈ Z] the notation Proof -By the proof of Proposition 6.1, χ q (S) 2 contains all the dominant monomials of χ q (S), and the claim follows from Remark 5.7 (ii). 2 6.3 Let γ = ∑ i∈I c i α i be an element of the root lattice with nonnegative coordinates c i . We denote by J = { j ∈ I | c j = 0} the support of γ, and we assume that J is connected. Let P Q ⇐⇒ Q − P ∈ N[Y ± i,r ; i ∈ I, r ∈ Z].m = ∏ i∈I 0 Y c i i,0 ∏ i∈I 1 Y c i i,3 . This is the highest l-weight of a simple object L(m) of C 1 . The next proposition shows how to calculate the truncated q-character of L(m) in terms of the truncation of ϕ J (m). Let K = {k ∈ I − J | a k j = −1 for some j ∈ J} be the subset of vertices adjacent to J. For k ∈ K denote by j k the unique j ∈ J such that a k j = −1. (The uniqueness of j k follows from the fact that the Dynkin graph has no cycle and J is connected.) Write ϕ J (m) 2 = m 1 + ∑ p M p , where the M p are monomials in A −1 j,1 , A −1 j,2 ( j ∈ J). In fact, by Proposition 5.8, it is easy to see that each M p is of the form M p = ∏ j∈J A −µ j,p j,1+ξ j for some µ j,p ∈ N. Proposition 6.6 We have χ q (L(m)) 2 = m 1 + ∑ p M p ∏ k∈K∩I 1 (1 + A −1 k,2 ) µ j k ,p . Proof -From 5.4, we have that ϕ J (m) χ q (L(m)). 
If k ∈ K ∩ I 1 then j k ∈ J ∩ I 0 , and if a monomial M p contains A −1 j k ,1 with exponent µ j k ,p > 0, then mM p contains Y µ j k ,p k,1 and no other Y k,r for r = 1. Therefore mM p is k-dominant. Using 5.3 we deduce that ϕ k (mM p ) = M p (1 + A −1 k,2 ) µ j k ,p χ q (L(m)), and this implies that m 1 + ∑ p M p ∏ k∈K∩I 1 (1 + A −1 k,2 ) µ j k ,p χ q (L(m)) 2 .(23) Note that if k ∈ K ∩ I 0 then j k ∈ J ∩ I 1 , and if mM p is k-dominant then it contains Y µ j k ,p k,2 . Hence, in this case ϕ k (mM p ) 2 = mM p does not contribute anything new to the truncated q-character. Let us now show that (23) is an equality. Suppose on the contrary that it is a strict inequality, and let mM be a monomial in χ q (L(m)) 2 which appears in the left-hand side of (23) with a strictly smaller multiplicity. Here M is a monomial in the variables A −1 i,ξ i +1 . Moreover, by definition of ϕ J (m) all the monomials of χ q (L(m)) obtained from m by multiplying by variables A −1 j,ξ j +1 with j ∈ J appear in ϕ J (m) with the same multiplicity. Hence M has to contain at least one variable A −1 k,ξ k +1 with k ∈ J. By Proposition 5.8 we can assume that k ∈ K ∩ I 1 . We will also assume that mM is maximal with these properties. Denote by r the multiplicity of mM in χ q (L(m)), and by v the exponent of A −1 k,2 in M. Since mM belongs to a truncated q-character, M does not contain any variable A −1 i,3 with a ik = −1, hence mM has to contain Y −v k,3 and therefore it can not be k-dominant. Hence, by 5.3, there exist k-dominant monomials m 1 , . . . , m s , in χ q (L(m)) (counted with their multiplicities) such that each ϕ k (m i ) contains mM with multiplicity r i and r 1 + · · · + r s = r. Since m i > mM for every i, the multiplicities of m i in both sides of (23) are equal. Moreover, by our construction, the left-hand side of (23) also contains ∑ i ϕ k (m i ), hence the multiplicity of mM in this left-hand side is at least r, which contradicts our assumption. Thus (23) is an equality. 2 6.4 We now calculate χ q (S(α)) 2 for a multiplicity-free positive root α, i.e. a root of the form α = α J = ∑ j∈J α j for some connected subset J of I. If g is not of type A, we assume that the trivalent node of the Dynkin diagram belongs to I 0 (this is no loss of generality, see 3.9). Proposition 6.7 Let J be a connected subset of I, and α = α J the corresponding multiplicity-free positive root. Let η ∈ {0, 1} I be the characteristic function of J, i.e. η i = 1 if i ∈ J and η i = 0 otherwise. Let m be the highest weight monomial of χ q (S(α)). We have χ q (S(α)) 2 = m ∑ ν∈S ∏ i∈I A −ν i i,1+ξ i , where S denotes the set of all finite sequences ν ∈ {0, 1} I such that ν i η i for every i ∈ I 0 , and for every i ∈ I 1 ν i max 0, −η i + ∑ j :a i j =−1 ν j .(24) Proof -We first prove the result for J = I = {1, . . . , n}. In this case η i ≡ 1 and the condition ν i η i is always satisfied. We use induction on n. For n = 1 we have χ q (S(α)) 2 = Y 1,0 (1 + A −1 1,1 ) if I = {1} = I 0 , and χ q (S(α)) 2 = Y 1,3 if I = I 1 . Assume now that n 2, and let m 1 be a monomial in the truncated q-character of S(α). We can suppose that the vertex labelled 1 is monovalent and adjacent to the vertex labelled 2. Put I ′ = {2, . . . , n}. There are two cases. (a) If 1 ∈ I 1 then m = Y 1,3 Y 2,0 Y 3,3 · · ·. Put m ′ = mY −1 1,3 . As L(m) appears as a subquotient of L(Y 1,3 ) ⊗ L(m ′ ) and χ q (L(Y 1,3 )) 2 χ q (L(m ′ )) 2 = Y 1,3 χ q (L(m ′ )) 2 , we have m 1 = Y 1,3 m ′ 1 where m ′ 1 is a monomial in χ q (L(m ′ )) 2 . 
By Proposition 6.6, χ q (L(m ′ )) 2 can be calculated from ϕ I ′ (m ′ ) 2 , and by induction on n we may assume that ϕ I ′ (m ′ ) 2 = m ′ ∑ ν∈S ′ ∏ i∈I ′ A −ν i i,1+ξ i , where S ′ is defined like S replacing I by I ′ . By Proposition 6.6, χ q (L(m ′ )) 2 can only differ from ϕ I ′ (m ′ ) 2 by certain summands M p (1 + A −1 1,2 ) µ 2,p , so this shows that the exponent ν i of A −1 i,1+ξ i in m 1 /m satisfies the condition (24) for i ∈ I ′ . It remains to check that ν 1 = 0. For this we can use the fact that χ q (L(m)) χ q (L(Y 1,3 Y 2,0 ))χ q (L(m ′ Y −1 2,0 )). Now by Proposition 6.6 and the first formula in Example 6.3 (with the nodes 1 and 2 of the A 2 Dynkin diagram exchanged), we get χ q (L(Y 1,3 Y 2,0 )) 2 = Y 1,3 Y 2,0 (1 + A −1 2,1 ∏ j =1,a j2 =−1 (1 + A −1 j,2 )). This does not contain A −1 1,2 , and neither does χ q (L(m ′ Y −1 2,0 )) 2 . (b) If 1 ∈ I 0 then 2 ∈ I 1 is not trivalent, so ν 2 1. We can assume that the vertex 3 is adjacent to 2. We thus have m = Y 1,0 Y 2,3 Y 3,0 · · ·. Put m ′ = mY −1 1,0 . Then m 1 = m 0 m ′ 1 where m 0 is a monomial in χ q (L(Y 1,0 )) 2 and m ′ 1 is a monomial in χ q (L(m ′ )) 2 . By Proposition 6.6 the monomial m 0 belongs to {Y 1,0 , Y 1,0 A −1 1,1 , Y 1,0 A −1 1,1 A −1 2,2 }. On the other hand, by Proposition 6.6 again, χ q (L(m ′ )) 2 = ϕ I ′ (m ′ ) 2 in this case, which is known by induction. If m 0 ≠ Y 1,0 A −1 1,1 A −1 2,2 we see that the exponent ν i of A −1 i,1+ξ i in m 1 /m satisfies condition (24) for every i ∈ I. If m 0 = Y 1,0 A −1 1,1 A −1 2,2 condition (24) would be violated if m ′ 1 /m ′ had ν 3 = 0. But in this case m 0 m ′ 1 could not be a monomial of χ q (L(m)) 2 . Indeed let us prove that if A −1 2,2 appears, then A −1 3,1 must appear. We have the inequality χ q (L(m)) 2 χ q (L(Y 1,0 Y 2,3 Y 3,0 )) 2 χ q (L(m ′ Y −1 2,3 Y −1 3,0 )) 2 . For type A 3 , again by using the first formula in Example 6.3 and Proposition 6.6, we have χ q (L(Y 2,3 Y 3,0 )) 2 = Y 2,3 Y 3,0 (1 + A −1 3,1 ) , χ q (L(Y 1,0 Y 2,3 )) 2 = Y 1,0 Y 2,3 (1 + A −1 1,1 ). Then we have the two inequalities χ q (L(Y 1,0 Y 2,3 Y 3,0 )) 2 χ q (L(Y 1,0 )) 2 χ q (L(Y 2,3 Y 3,0 )) 2 and χ q (L(Y 1,0 Y 2,3 Y 3,0 )) χ q (L(Y 1,0 Y 2,3 )) 2 χ q (L(Y 3,0 )) 2 which follow as above from a subquotient argument. As a consequence we get χ q (L(Y 1,0 Y 2,3 Y 3,0 )) 2 Y 1,0 Y 2,3 Y 3,0 1 + A −1 1,1 + A −1 3,1 + A −1 1,1 A −1 3,1 (1 + A −1 2,2 ) .(25) Now we get the result by Proposition 6.6. Finally, the general case of a root α = α J for a subinterval J of I follows from the case J = I. Indeed what we have already proved gives us ϕ J (m) 2 , and Proposition 6.6 then yields the value of χ q (S(α)) 2 . 2 Corollary 6.8 Assume that the trivalent node is in I 0 if g is of type D or E. Let α J be a multiplicity-free root. Then χ q (S(α J )) is given by the Frenkel-Mukhin algorithm. Proof -Let m be the highest l-weight of S(α). It follows easily from the explicit formula of Proposition 6.7 that FM(m) χ q (S(α)) 2 . The only point which needs to be checked reduces to type A 3 for m = Y 1,0 Y 2,3 Y 3,0 . By (25), it suffices to check that m(1 + A −1 1,1 + A −1 3,1 + A −1 1,1 A −1 3,1 (1 + A −1 2,2 )) FM(m). We have clearly m(1 + A −1 1,1 + A −1 3,1 + A −1 1,1 A −1 3,1 ) FM(m). Then ϕ 2 (mA −1 1,1 A −1 3,1 ) = ϕ 2 (Y 2 2,1 Y 2,3 ) = ϕ 2 (Y 2,1 )ϕ 2 (Y 2,1 Y 2,3 ) where mA −1 1,1 A −1 3,1 A −1 2,2 appears. Now the claim follows from Corollary 6.5.
2 If g is of type A, all its positive roots are multiplicity-free, thus Proposition 6.7 and Example 6.4 give explicit formulae for all the truncated q-characters χ q (S(α)) 2 in that case. Example 6.9 We take g of type A 3 . We assume that I 0 = {1, 3} and I 1 = {2}. The truncated q-characters of the modules S(α) (α ∈ Φ −1 ) are χ q (S(−α 1 )) 2 = Y 1,2 , χ q (S(α 1 )) 2 = Y 1,0 (1 + A −1 1,1 + A −1 1,1 A −1 2,2 ), χ q (S(−α 2 )) 2 = Y 2,1 (1 + A −1 2,2 ), χ q (S(α 2 )) 2 = Y 2,3 , χ q (S(−α 3 )) 2 = Y 3,2 , χ q (S(α 3 )) 2 = Y 3,0 (1 + A −1 3,1 + A −1 2,2 A −1 3,1 ), χ q (S(α 1 + α 2 )) 2 = Y 1,0 Y 2,3 (1 + A −1 1,1 ), χ q (S(α 2 + α 3 )) 2 = Y 2,3 Y 3,0 (1 + A −1 3,1 ), χ q (S(α 1 + α 2 + α 3 )) 2 = Y 1,0 Y 2,3 Y 3,0 (1 + A −1 1,1 + A −1 3,1 + A −1 1,1 A −1 3,1 + A −1 1,1 A −1 2,2 A −1 3,1 ). Moreover, by Example 6.4, the frozen simple objects F i have the following truncated q-characters χ q (F 1 ) 2 = Y 1,0 Y 1,2 , χ q (F 2 ) 2 = Y 2,1 Y 2,3 , χ q (F 3 ) 2 = Y 3,0 Y 3,2 . Corollary 6.10 Let α J be a multiplicity-free root. Then S(α J ) is a prime simple object. Proof -Let m be the highest l-weight of S(α J ). We have m = ∏ i∈J∩I 0 Y i,0 ∏ i∈J∩I 1 Y i,3 . Suppose that S(α J ) is not prime. Then S(α J ) ∼ = L(m 1 ) ⊗ · · · ⊗ L(m k ),(26) where m 1 · · · m k = m. Clearly, there must exist i ∈ J ∩ I 0 and j ∈ J ∩ I 1 with a i j = −1 and such that Y i,0 and Y j,3 do not occur in the same monomial m r (1 r k). Let m s be the monomial containing Y i,0 . Then χ q (L(m s )) contains m s A −1 i,1 A −1 j,2 by Proposition 6.6, hence mA −1 i,1 A −1 j,2 occurs in χ q (L(m 1 ) ⊗ · · · ⊗ L(m k )). On the other hand, it follows from Proposition 6.7 that mA −1 i,1 A −1 j,2 is not a monomial of χ q (S(α J )), which contradicts (26). 2 F-polynomials In [FZ5], Fomin and Zelevinsky have shown that the cluster variables of A have a nice expression in terms of certain polynomials called the F-polynomials, which are closely related to the Fibonacci polynomials of [FZ2]. Moreover, [FZ2] gives some explicit formulae for the Fibonacci polynomials in type A and D. We will recall these results in a form suitable to our present purpose. 7.1 In Section 4, we have used x = {x[−α i ] | i ∈ I} as our reference cluster. It will be convenient here to work with a slightly different cluster denoted by z = {z i | i ∈ I}, and given by z i = x[−α i ] if i ∈ I 0 , x[α i ] if i ∈ I 1 . It is easy to see that one passes from x to z by applying the sequence of mutations ∏ k∈I 1 µ k (it does not matter in which order since they pairwise commute). A straightforward calculation shows that the exchange matrix B z = [b z i j ] at z is given by b z i j =              ε j a i j if i, j ∈ I and i = j, −1 if j ∈ I and i = j + n ∈ I ′ , −a k j if j ∈ I 0 , and i = k + n ∈ I ′ with k = j, 0 otherwise. Here the column set is indexed by I and the row set is indexed by I ∪ I ′ ≡ [1, n] ∪ [n + 1, 2n]. Following [FZ5,§6] we define the following elements of F : y j = ∏ i∈I f b z i+n, j i , y j = y j ∏ i∈I z b z i j i .(27) Example 7.1 We take g of type A 3 and I 0 = {1, 3}. We have B z =         0 1 0 −1 0 −1 0 1 0 −1 0 0 1 −1 1 0 0 −1         . We have y 1 = f −1 1 f 2 , y 2 = f −1 2 , y 3 = f 2 f −1 3 , y 1 = z −1 2 f −1 1 f 2 , y 2 = z 1 z 3 f −1 2 , y 3 = z −1 2 f 2 f −1 3 . Following [FZ5,§10] we denote by E the linear automorphism of the root lattice Q given by E(α i ) = −ε i α i , (i ∈ I). 
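Before turning to the piecewise-linear involutions, here is a quick machine check of the exchange matrix B z defined above. The sketch below is our own code (the function name and data layout are ours): it builds B z from a Cartan matrix and the choice of I 0 by following the displayed case distinction, and recovers the 6 × 3 matrix of Example 7.1 for type A 3 with I 0 = {1, 3}.

```python
def build_Bz(A, I0):
    """Exchange matrix B_z from the Cartan matrix A = (a_ij) and the set I0.

    Columns are indexed by I = {1,...,n}; rows by I followed by I' = {n+1,...,2n},
    following the case distinction for b^z_{ij} given in the text.
    """
    n = len(A)
    eps = {i: (1 if i in I0 else -1) for i in range(1, n + 1)}
    B = [[0] * n for _ in range(2 * n)]
    for j in range(1, n + 1):
        for i in range(1, n + 1):            # principal part: eps_j * a_ij for i != j
            if i != j:
                B[i - 1][j - 1] = eps[j] * A[i - 1][j - 1]
        for k in range(1, n + 1):            # rows indexed by I'
            if k == j:
                B[k + n - 1][j - 1] = -1
            elif j in I0:
                B[k + n - 1][j - 1] = -A[k - 1][j - 1]
    return B

# Type A_3 with I0 = {1,3}, as in Example 7.1.
A3 = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
expected = [[0, 1, 0], [-1, 0, -1], [0, 1, 0],
            [-1, 0, 0], [1, -1, 1], [0, 0, -1]]
assert build_Bz(A3, {1, 3}) == expected
```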
We also define piecewise-linear involutions τ ε (ε = ±1) of Q by [τ ε (γ) : α i ] =    −[γ : α i ] − ∑ j =i a i j max(0, [γ : α j ]) if ε i = ε, [γ : α i ] if ε i = ε.(28) Here, for γ ∈ Q, we denote by [γ : α i ] the coefficient of α i in the expansion of γ on the basis of simple roots. It is easy to see that τ ε preserves Φ −1 . For α ∈ Φ −1 , one then defines the g-vector g(α) = Eτ − (α).(29) The involution τ − relates the natural labellings of the cluster variables with respect to x and z. We shall write z[α] = x[τ − (α)]. In particular, since τ − (−α i ) = −ε i α i , we have z i = z[−α i ] (i ∈ I). Consider the multiplicative group P of all Laurent monomials in the variables f i (i ∈ I). As in [FZ5,Def. 2.2], we introduce the addition ⊕ given by ∏ i f a i i ⊕ ∏ i f b i i = ∏ i f min(a i ,b i ) i . Endowed with this operation and its ordinary multiplication and division, P becomes a semifield, called the tropical semifield. If F(t 1 , . . . ,t n ) is a subtraction-free rational expression with integer coefficients in some variables t i , then we can evaluate it in P by specializing the t i to some elements p i of P. This will be denoted by F| P (p 1 , . . . , p n ). 7.3 To define the F-polynomials, we need to introduce a variant of A called the cluster algebra with principal coefficients. We shall denote it by A pr . It is given by the initial seed (u, B pr ), where u = (u 1 , . . . , u n , v 1 , . . . , v n ), and B pr is the 2n × n matrix with the same principal part as B z and with lower part equal to the n × n identity matrix. Thus (v 1 , . . . , v n ) are the frozen variables of A pr . By [FZ3] every cluster variable of A pr is of the form u[α] = N α (u 1 , . . . , u n , v 1 , . . . , v n ) u a 1 1 · · · u a n n(30) for some α = ∑ i∈I a i α i ∈ Φ −1 . Here N α is a polynomial, and N −α i ≡ 1 (i ∈ I). Following [FZ5,§3], we can now define the F-polynomials by specializing all the u i to 1: F α (v 1 , . . . , v n ) = N α (1, . . . , 1, v 1 , . . . , v n ), (α ∈ Φ −1 ).(31) For example, for the simple root α i we have F α i (v 1 , . . . , v n ) = 1 + v i . It is known that F α is a polynomial with positive integer coefficients [FZ5,Cor. 11.7]. The main formula is then [FZ5,Cor. 6.3]: z[α] = F α ( y 1 , . . . , y n ) F α | P (y 1 , . . . , y n ) z g(α) ,(32) where, if g(α) = (g 1 , . . . , g n ), we write for short z g(α) = z g 1 1 · · · z g n n . This means that the cluster variable z[α] is completely determined by the corresponding F-polynomial F α and g-vector g(α). 7.4 Recall from §4.4 the ring isomorphism ι : A → R 1 . Note that by comparing the relation x[α i ]x[−α i ] = f i + ∏ j:a i j =−1 x[−α j ] with the T -system equation (5) Taking into account Proposition 6.1, we can regard ι as an isomorphism from A to the subring of Y generated by the truncated q-characters of objects of C 1 . We then have (cf. Example 6.4) ι(z i ) = Y i,ξ i +2 , ι( f i ) = Y i,ξ i Y i,ξ i +2 , (i ∈ I).(33) Moreover, extending ι to a homomorphism from F to the fraction field of Y , we can consider the elements ι( y j ). Lemma 7.2 For j ∈ I, we have ι( y j ) = A −1 j,ξ j +1 . Proof -If j ∈ I 1 , we have ι( y j ) = ι( f j ) −1 ∏ i = j ι(z i ) −a i j = Y −1 j,ξ j Y −1 j,ξ j +2 ∏ i = j Y −a i j i,ξ i +2 = A −1 j,ξ j +1 , since ξ i + 2 = ξ j + 1 if j ∈ I 1 and i ∈ I 0 . 
On the other hand, if j ∈ I 0 , we have ι( y j ) = ι( f j ) −1 ∏ i = j ι z i f i a i j = Y −1 j,ξ j Y −1 j,ξ j +2 ∏ i = j Y −a i j i,ξ i Y −a i j i,ξ i +2 Y a i j i,ξ i +2 = Y −1 j,ξ j Y −1 j,ξ j +2 ∏ i = j Y −a i j i,ξ i = A −1 j,ξ j +1 , since ξ i = ξ j + 1 if j ∈ I 0 and i ∈ I 1 . 2 Lemma 7.3 Let α ∈ Φ −1 and set β = τ − α = ∑ i b i α i . We have ι z g(α) F α | P (y 1 , . . . , y n ) =    ∏ i∈I 0 Y b i i,0 ∏ i∈I 1 Y b i i,3 if β > 0, Y i,2−ξ i if β = −α i . Proof -Write α = ∑ i a i α i . If α = −α i then F α = 1. Moreover, β = −ε i α i , g(α) = α i , so ι z g(α) F α | P (y 1 , . . . , y n ) = ι(z i ) = Y i,ξ i +2 , which proves the formula in this case. Otherwise if α > 0, by [FZ5,Cor. 10.10] the polynomial F α has a unique monomial of maximal degree, which is divisible by all the other occuring monomials and has coefficient 1. This monomial is m = ∏ i v a i i . When we evaluate m at v i = y i = ∏ k∈I f b z k+n,i k we obtain ∏ i∈I f −a i i ∏ i∈I 1 f − ∑ j =i a j a i j i = ∏ i∈I 0 f −a i i ∏ i∈I 1 f −a i −∑ j =i a j a i j i = ∏ i∈I 0 f −b i i ∏ i∈I 1 f b i i . Moreover F α has constant term 1. Therefore, if β > 0, then b i > 0 for every i ∈ I and F α | P (y 1 , . . . , y n ) = ∏ i∈I 0 f −b i i . Otherwise, if β = −α i with i ∈ I 1 we have F α | P (y 1 , . . . , y n ) = f −1 i (the case β = −α i with i ∈ I 0 has already been dealt with). On the other hand, g(α) = E(β ) = − ∑ i∈I b i ε i α i . Hence, if β > 0 then ι z g(α) F α | P (y 1 , . . . , y n ) = ∏ i∈I 0 ι( f i ) b i ∏ i∈I 0 ι(z i ) −b i ∏ i∈I 1 ι(z i ) b i = ∏ i∈I 0 Y b i i,0 ∏ i∈I 1 Y b i i,3 . If β = −α i and i ∈ I 1 , we have ι z g(α) F α | P (y 1 , . . . , y n ) = ι( f i )ι(z −1 i ) = Y i,1 . 2 Corollary 7.4 Let β ∈ Φ −1 and set α = τ − β . Let Y β denote the highest weight monomial of χ q (S(β )). We have ι(x[β ]) = Y β F α A −1 1,ξ 1 +1 , . . . , A −1 n,ξ n +1 . Proof -This follows immediately from Lemma 7.2, Lemma 7.3, and the relation x[β ] = z[τ − β ]. 2 Using the notation of (18), we see that the proof of Conjecture 4.6 (i) is now reduced to establishing the following polynomial identity in N[A −1 i,ξ i +1 ]: χ q (S(β )) 2 = F τ − (β ) , (β ∈ Φ >0 ),(34) that is, the normalized truncated q-character of S(β ) should coincide with the F-polynomial attached to the root τ − (β ). 7.5 We now recall an explicit formula of Fomin and Zelevinsky for the F-polynomials. This is obtained by combining [FZ2,Prop. 2.10] with [FZ5,Th. 11.6]. It covers all the F-polynomials in type A and D. Let α = ∑ i a i α i ∈ Φ >0 . Assume that a i 2 for every i ∈ I. Let γ = ∑ i c i α i . We say that γ is α-acceptable if (i) 0 c i a i for every i ∈ I; (ii) if i ∈ I 1 and j ∈ I 0 are adjacent then c i (2 − a j ) + c j ; (iii) there is no simple path (i 0 , . . . , i m ) of length m 1 contained in the support of α such that a i 0 = a i m = 1 and for k = 0, . . . , m c i k = 1 if i k ∈ I 1 , a i k − 1 if i k ∈ I 0 . In condition (iii) above, by a simple path we mean any path in the Dynkin diagram whose vertices are pairwise distinct. Finally, let e(γ, α) be the number of connected components of the set {i ∈ I | c i = 1 if i ∈ I 1 and c i = a i − 1 if i ∈ I 0 } that are contained in {i ∈ I | a i = 2}. Then Proposition 7.5 [FZ2, FZ5] F α (v 1 , . . . , v n ) = ∑ γ 2 e(γ,α) v γ ,(35) where the sum is over all α-acceptable γ ∈ Q, and we write v γ = ∏ i∈I v c i i . Example 7.6 Take g of type D 4 and choose I 0 = {2}, where 2 labels the trivalent node. Let α = α 1 + 2α 2 + α 3 + α 4 be the highest root. 
Then F α = 1 + 2v 2 + v 2 2 + v 1 v 2 + v 2 v 3 + v 2 v 4 + v 1 v 2 2 + v 2 2 v 3 + v 2 2 v 4 + v 1 v 2 2 v 3 + v 1 v 2 2 v 4 + v 2 2 v 3 v 4 + v 1 v 2 2 v 3 v 4 . 7.5.2 If α is a multiplicity-free root, that is, if a i 1 for all roots, (35) simplifies greatly. Indeed, in the definition of an α-acceptable vector γ condition (ii) is a consequence of condition (i), and condition (iii) reduces to (iv) for every i ∈ I 1 , c i min{c j | j ∈ supp(α) and j adjacent to i}. Example 7.7 Take g of type A 3 and choose I 0 = {1, 3}. Then F α 1 = 1 + v 1 , F α 2 = 1 + v 2 , F α 3 = 1 + v 3 , F α 1 +α 2 = 1 + v 1 + v 1 v 2 , F α 2 +α 3 = 1 + v 3 + v 2 v 3 , F α 1 +α 2 +α 3 = 1 + v 1 + v 3 + v 1 v 3 + v 1 v 2 v 3 . 7.6 We can now prove the following particular case of Conjecture 4.6 (i) valid for all Lie types (see Eq. (34)). We assume (as we may, cf. 3.9) that the trivalent node is in I 0 if g is of type D or E. Theorem 7.8 Let β be a multiplicity-free positive root. Then χ q (S(β )) 2 = F τ − (β ) . Proof -Suppose first that β = α i with i ∈ I 1 . Then χ q (S(β )) 2 = Y i,3 , and τ − (β ) = −α i so that F τ − (β ) = 1. Hence the claim is verified in this case. So we can assume that β = α J is the multiplicity-free root supported on a connected subset J of I which is not reduced to a single element of I 1 . Set J ′ = { j ∈ J | j ∈ I 1 and a i j = −1 for some i ∈ I \ J}, K ′ = {i ∈ I \ J | i ∈ I 1 and a i j = −1 for some j ∈ J}, and define K = (J \ J ′ ) ⊔ K ′ . It follows immediately from the definition of τ − that the multiplicityfree root α K supported on K is equal to τ − (α J ). Let us now compare the sequences ν ∈ S of Proposition 6.7 for α J , with the α K -acceptable vectors γ = ∑ i c i α i of 7.5.2. First we note that if i ∈ K ∪ J = K ′ ⊔ (J \ J ′ ) ⊔ J ′ then both ν i and c i are equal to 0 (this is a straightforward consequence of the definitions of S and of an α K -acceptable vector). For j ∈ J ∩ I 0 = K ∩ I 0 the only condition satisfied by ν j and c j is that they should belong to {0, 1}. If j ∈ (J \J ′ )∩I 1 then j must have two neighbours j ′ and j ′′ in J ∩I 0 , and the condition on v j is 0 ν j max{0, ν j ′ + ν j ′′ − 1}, while the condition on c j is 0 c j min{c j ′ , c j ′′ }. Clearly, since c j ′ and c j ′′ belong to {0, 1}, the second condition can be rewritten 0 c j max{0, c j ′ + c j ′′ − 1}. If j ∈ K ′ then j has a unique neighbour j ′ ∈ J, and the condition on ν j is 0 ν j ν j ′ while the condition on c j is 0 c j c j ′ . Finally, if j ∈ J ′ then j has a unique neighbour j ′ ∈ J, and the condition on ν j is 0 ν j max{0, −1 + v j ′ } while the condition on c j is c j = 0. Clearly the condition on ν j forces ν j = 0. In conclusion, the exponents ν and the vectors γ must satisfy the same conditions, and the theorem follows from Proposition 6.7 and 7.5.2. 2 8 A tensor product theorem 8.1 In a cluster algebra A , a product y 1 · · · y k of cluster variables is a cluster monomial if and only if for every 1 i < j k the product y i y j is a cluster monomial. In this section we show the following theorem, which is consistent with our categorification Conjecture 4.6 (ii), and will be needed to prove it. Theorem 8.1 Let S 1 , . . . , S k , be simple objects of C 1 . Suppose that S i ⊗ S j is simple for every 1 i < j k. Then S 1 ⊗ · · · ⊗ S k is simple. Note that we may have S i ∼ = S j for some i < j, in which case our assumption includes the simplicity of S ⊗2 i . The proof will be given in 8.5. 
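Returning briefly to the combinatorics of 7.5.2 before the proof of Theorem 8.1: for a multiplicity-free root, conditions (i) and (iv) are easy to implement. The following sketch is our own code (the function name and data layout are assumptions, and only the multiplicity-free case is treated, so every coefficient 2 e(γ,α) equals 1); it enumerates the α-acceptable vectors and reproduces the F-polynomials of Example 7.7 in type A 3 with I 0 = {1, 3}.

```python
from itertools import product

def F_multiplicity_free(support, I1, adjacency):
    """Acceptable vectors for a multiplicity-free root alpha supported on `support`.

    Uses conditions (i) and (iv) of 7.5.2: 0 <= c_i <= 1 on the support (c_i = 0
    off it), and for i in I1, c_i <= min of the c_j over the neighbours j of i
    lying in the support (no constraint if there is no such neighbour).
    Each returned tuple encodes one monomial v^gamma of F_alpha.
    """
    vertices = sorted(set().union(*adjacency.values(), adjacency.keys()))
    terms = []
    for cs in product(*[(0, 1) if v in support else (0,) for v in vertices]):
        c = dict(zip(vertices, cs))
        ok = all(
            c[i] <= min((c[j] for j in adjacency[i] if j in support), default=1)
            for i in vertices if i in I1
        )
        if ok:
            terms.append(cs)
    return terms

# Type A_3, I0 = {1,3}, I1 = {2}, as in Example 7.7.
adj = {1: [2], 2: [1, 3], 3: [2]}
# alpha = alpha_1 + alpha_2 + alpha_3: expect 1 + v1 + v3 + v1*v3 + v1*v2*v3.
assert sorted(F_multiplicity_free({1, 2, 3}, {2}, adj)) == sorted(
    [(0, 0, 0), (1, 0, 0), (0, 0, 1), (1, 0, 1), (1, 1, 1)])
# alpha = alpha_1 + alpha_2: expect 1 + v1 + v1*v2.
assert sorted(F_multiplicity_free({1, 2}, {2}, adj)) == sorted(
    [(0, 0, 0), (1, 0, 0), (1, 1, 0)])
```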
A similar result for a special class of modules of the Yangian of gl n attached to skew Young diagrams was given by Nazarov and Tarasov [NT]. 8.2 We need to recall some results about tensor products of objects of C . Let ∆ be the comultiplication of U q ( g). In general one does not know explicit formulae for calculating ∆ in terms of the Drinfeld generators. However, the next proposition contains partial information which will be sufficient for our purposes. We have a natural grading of U q ( g) by the root lattice Q of g given by deg(x ± i,r ) = ±α i , deg(h i,s ) = deg(k ± i ) = deg(c ± ) = 0, (i ∈ I, r ∈ Z, s ∈ Z \ {0}). Let U + (resp. U − ) be the subalgebra of U q ( g) consisting of elements of positive (resp. negative) Q-degree. Proposition 8.2 [D,Prop. 7.1] For i ∈ I and r > 0 we have ∆(h i,r ) − h i,r ⊗ 1 − 1 ⊗ h i,r ∈ U − ⊗U + . The next result is about tensor products of fundamental modules. For a fundamental module L(Y i,r ) = V i,q r in C Z we denote by u i,r a highest weight vector (it is unique up to rescaling by a non-zero complex number). Theorem 8.3 [Cha2, K, VV] Let r 1 , . . . , r s be integers with r 1 · · · r s . Then, for any i 1 , . . . , i s , the tensor product of fundamental modules L(Y i s ,r s ) ⊗ · · · ⊗ L(Y i 1 ,r 1 ) is a cyclic module generated by the tensor product of highest weight vectors u i s ,r s ⊗ · · · ⊗ u i 1 ,r 1 . Moreover, there is a unique homomorphism φ : L(Y i s ,r s ) ⊗ · · · ⊗ L(Y i 1 ,r 1 ) → L(Y i 1 ,r 1 ) ⊗ · · · ⊗ L(Y i s ,r s ) with φ (u i s ,r s ⊗ · · · ⊗ u i 1 ,r 1 ) = u i 1 ,r 1 ⊗ · · · ⊗ u i s ,r s , and its image is the simple module L(Y i 1 ,r 1 · · ·Y i s ,r s ). Example 8.4 (cf. Example 5.1.) Take g = sl 2 and consider the tensor product L(Y 0 ) ⊗ L(Y 2 ). We have L(Y 0 ) = Cu 0 ⊕ Cv 0 , L(Y 2 ) = Cu 2 ⊕ Cv 2 , where u 0 and u 2 are the highest weight vectors, and v 0 = x − 0 u 0 , v 2 = x − 0 u 2 . Each of these four vectors spans a one-dimensional U q (sl 2 )-weight-space, hence also an l-weight-space, and we have h r u 0 = [r] q r u 0 , h r u 2 = [r] q r q 2r u 2 , h r v 0 = − [r] q r q 2r v 0 , h r v 2 = − [r] q r q 4r v 2 , where [r] q = (q r − q −r )/(q − q −1 ). By Theorem 8.3, the submodule of L(Y 0 ) ⊗ L(Y 2 ) generated by u 0 ⊗ u 2 is the three-dimensional simple module L(Y 0 Y 2 ), with basis u 0 ⊗ u 2 , x − 0 (u 0 ⊗ u 2 ) = u 0 ⊗ v 2 + qv 0 ⊗ u 2 , v 0 ⊗ v 2 . The U q (sl 2 )-weight-spaces of L(Y 0 Y 2 ) are also one-dimensional and coincide with its l-weightspaces. By Proposition 8.2 we have h r (x − 0 (u 0 ⊗ u 2 )) = [r] q r (1 − q 4r )u 0 ⊗ v 2 + λ v 0 ⊗ u 2 for some λ ∈ C. Hence u 0 ⊗ v 2 + qv 0 ⊗ u 2 has l-weight Y 0 Y −1 4 , which is the product of the l-weights of u 0 and v 2 . On the other hand, again by Proposition 8.2, we have h r (v 0 ⊗ u 2 ) = 0 for every r > 0, hence v 0 ⊗ u 2 is an l-weight-vector with l-weight 1. This shows that u 0 ⊗ v 2 is not an l-weight vector. 8.3 To an object V of C 1 which is generated by a highest weight vector v, we attach as in 6.2 a truncated below q-character χ q (V ) 3 as follows. We have χ q (V ) = m(1 + ∑ p M (p) ), where m is the l-weight of v, and the M (p) are monomials in the A −1 i,r (i ∈ I, r ∈ Z >0 ). Define χ q (V ) 3 := m 1 + ∑ p * M (p) where ∑ * means the sum restricted to the M (p) which have no variable A −1 i,1 , A −1 i,2 (i ∈ I). We shall also denote by V 3 the subspace of V obtained by taking the direct sum of the corresponding l-weight-spaces. Let S be a simple object in C 1 and let m = ∏ i∈I Y a i i,ξ i Y b i i,ξ i +2 be its highest weight monomial. 
We define m − = ∏ i∈I Y a i i,ξ i , S − = L(m − ), m + = ∏ i∈I Y b i i,ξ i +2 , S + = L(m + ). Then S − is a simple object of the subcategory C 0 , and therefore S − ∼ = i∈I L(Y i,ξ i ) ⊗a i is a tensor product of fundamental modules (cf. Example 3.3). Similarly, S + ∼ = i∈I L(Y i,ξ i +2 ) ⊗b i . Applying Theorem 8.3, we get a surjective homomorphism φ S : S + ⊗ S − ։ S ⊂ S − ⊗ S + . Lemma 8.5 Let u − be a highest weight vector of S − . We have φ −1 S (S 3 ) = S + ⊗ u − and φ S restricts to a bijection from S + ⊗ u − to S 3 . Proof -First we have χ q (S + ⊗ S − ) 3 = χ q (S + ) 3 χ q (S − ) 3 . Then by Proposition 5.8 we have χ q (S + ) 3 = χ q (S + ) and χ q (S − ) 3 = m − . So χ q (S + ⊗ S − ) 3 = χ q (S + )m − . Then we note that χ q (S) 3 = m − χ q (S + ). Indeed, the inequality χ q (S) 3 m − χ q (S + ) follows from the factorization m = m − m + . On the other hand, S + is minuscule by § 5.2.5, so χ q (S + ) = FM(m + ). By using inductively 5.4 for χ q (S) we get the converse inequality. Hence we get that χ q (S + ⊗ S − ) 3 = χ q (S) 3 . Since φ S is a homomorphism of U q ( g)-modules, it preserves l-weight-spaces, hence it follows that it restricts to a vector space isomorphism from (S + ⊗ S − ) 3 to S 3 . Now since u − is a highest-weight vector of S − , Proposition 8.2 shows that for every l-weight vector v ∈ S + , v ⊗ u − is an l-weight vector of S + ⊗ S − with l-weight equal to the product of the l-weight of v by m − . This implies that S + ⊗ u − ⊂ (S + ⊗ S − ) 3 . Hence, since these two vector spaces have the same q-character, they are equal and the lemma is proved. 2 8.5 We can now give the proof of Theorem 8.1. For i = 1, . . . , k, define m ± i , S ± i , u ± i , φ S i as above. First consider a pair 1 i < j k. Note that by Example 3.3 and Section 5.2.5, S ± i ⊗ S ± j is minuscule and simple. Now, since S i ⊗ S j is simple, and S + j ⊗ S + i ⊗ S − i ⊗ S − j can be written as a product of fundamental modules with non-increasing spectral parameters, by Theorem 8.3, we have a surjective homomorphism φ : S + j ⊗ S + i ⊗ S − i ⊗ S − j ։ S i ⊗ S j , which is unique up to a scalar multiple. Using three times Theorem 8.3, this homomorphism, composed with the inclusion S i ⊗ S j ⊂ S − i ⊗ S + i ⊗ S − j ⊗ S + j , can be factored as follows S + j ⊗ (S + i ⊗ S − i ) ⊗ S − j → (S + j ⊗ S − i ) ⊗ S + i ⊗ S − j → S − i ⊗ (S + j ⊗ S + i ) ⊗ S − j ∼ = ∼ = S − i ⊗ S + i ⊗ (S + j ⊗ S − j ) → S − i ⊗ S + i ⊗ S − j ⊗ S + j . The map from the second module to the fourth one can be written as α ⊗ id S − j , where α : S + j ⊗ S − i ⊗ S + i → S − i ⊗ S + i ⊗ S + j restricts to a homomorphism α from S + j ⊗ S i to S i ⊗ S + j . Since φ is surjective, by applying Lemma 8.5 to S = S j , we get that im(α)⊗ u − j contains S i ⊗ S + j ⊗ u − j , hence α is surjective. To sum- marize, we have obtained the existence of a unique homomorphism S + j ⊗ S i → S i ⊗ S + j mapping u + j ⊗ u i to u i ⊗ u + j , which is surjective. Now we proceed by induction on k 2 to prove Theorem 8.1. For k = 2 there is nothing to prove, so take k 3 and assume that S 1 ⊗ · · · ⊗ S k−1 is simple. By Theorem 8.3 we obtain a surjective homomorphism S + k ⊗ (S + 1 ⊗ · · · ⊗ S + k−1 ) ⊗ (S − 1 ⊗ · · · ⊗ S − k−1 ) ⊗ S − k ։ S + k ⊗ (S 1 ⊗ · · · ⊗ S k−1 ) ⊗ S − k . 
Using the above result, we then have a sequence of surjective homomorphisms S + k ⊗ (S 1 ⊗ · · · ⊗ S k−1 ) ։ S 1 ⊗ S + k ⊗ S 2 ⊗ · · · ⊗ S k−1 ։ · · · ։ (S 1 ⊗ · · · ⊗ S k−1 ) ⊗ S + k , hence S + k ⊗ (S + 1 ⊗ · · · ⊗ S + k−1 ) ⊗ (S − 1 ⊗ · · · ⊗ S − k−1 ) ⊗ S − k ։ (S 1 ⊗ · · · ⊗ S k−1 ) ⊗ S + k ⊗ S − k ։ S 1 ⊗ · · · ⊗ S k . Thus we have shown that V := S 1 ⊗ · · · ⊗ S k is a quotient of a tensor product of fundamental modules, which by Theorem 8.3 is a cyclic module generated by its highest weight vector. Hence V has no proper submodule containing its highest weight-space. We can now conclude, as in [CP2,§4.10], by considering the dual module V * = S * k ⊗ · · · ⊗ S * 1 . By our assumption, S * j ⊗ S * i ∼ = (S i ⊗ S j ) * is simple for every 1 i < j k. Moreover, by 3.4, the modules S * i belong to a category defined like C 1 except for a shift of all spectral parameters by q −h . Thus, by using the same proof, V * also has no proper submodule containing its highest weight-space. But if W was a proper submodule of V not containing its highest weight-space, then the annihilator W • of W in V * would be a proper submodule of V * containing its lowest weightspace. Let λ i (resp. µ i ) be the lowest (resp. highest) weight of S * i considered as a U q (g)-module. Then λ = λ 1 + · · · + λ n (resp. µ = µ 1 + · · · + µ n ) is the lowest (resp. highest) weight of V * . In the direct sum decomposition of V * as a U q (g)-module, there is a unique simple module L with lowest weight λ , and by our assumption, L is contained in W • . By [CP4,Proposition 5.1 (b)] (see also [FM,Theorem 1.3 (3)]), µ is the highest weight of L, and therefore W • must also contain the highest weight-space of V * , which is impossible. This finishes the proof of Theorem 8.1. 2 9 Cluster expansions 9.1 Following [FZ2,FZ3], we say that two roots α, β ∈ Φ −1 are compatible if the cluster variables x[α] and x[β ] belong to a common cluster of A . Let γ be an element of the root lattice Q. A cluster expansion of γ is a way to express γ as γ = ∑ α∈Φ −1 n α α,(36) where all n α are nonnegative integers, and n α n β = 0 whenever α and β are not compatible. In plain words, a cluster expansion is an expansion into a sum of pairwise compatible roots in Φ −1 . Theorem 9.1 [FZ2,Th. 3.11] Every element of the root lattice has a unique cluster expansion. For γ = ∑ i c i α i ∈ Q, define Y γ := ∏ c i <0 Y |c i | i,2−ξ i ∏ c i >0 Y c i i,ξ i −ε i +1 ∈ M + . This definition is such that for α ∈ Φ −1 , the monomial Y α is the highest l-weight of S(α). Let S be a simple object in C 1 . Its highest l-weight is of the form m = ∏ i∈I Y a i i,ξ i Y b i i,ξ i +2 for some nonnegative integers a i , b i . Clearly we can write m as m = ∏ i∈I (Y i,ξ i Y i,ξ i +2 ) min(a i ,b i ) Y γ , for some unique γ ∈ Q. Hence using Theorem 9.1 and (36), we have a unique factorization m = ∏ i∈I (Y i,ξ i Y i,ξ i +2 ) min(a i ,b i ) ∏ α∈Φ −1 (Y α ) n α , where the α for which n α > 0 are pairwise compatible. Suppose that Conjecture 4.6 (i) is established. Since S(α) = L(Y α ) and F i = L(Y i,ξ i Y i,ξ i +2 ), to prove Conjecture 4.6 (ii) we then need to prove that L(m) ∼ = i∈I F ⊗ min(a i ,b i ) i α∈Φ −1 S(α) ⊗n α . Given that the highest l-weights of the two sides coincide, this amounts to prove that the tensor product of the right-hand side is a simple module. Because of Theorem 8.1, this will be the case if and only if every pair of factors has a simple tensor product. 
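The factorization of m into a frozen part and Y γ just described is completely explicit: comparing the exponents a i , b i with the definition of Y γ gives c i = ε i (a i − b i ), with ε i = 1 for i ∈ I 0 and ε i = −1 for i ∈ I 1 (this closed formula is our reading of the definitions above, not a statement taken from the paper). The short sketch below computes the frozen multiplicities and γ; the test case is the A 2 module L(Y 2 1,0 Y 2,3 ) of Example 6.3, for which γ = 2α 1 + α 2 , consistent with the factorization L(Y 1,0 ) ⊗ L(Y 1,0 Y 2,3 ) and with the dimension vector appearing later in Example 12.8.

```python
def frozen_and_gamma(a, b, I0):
    """Split m = prod_i Y_{i,xi_i}^{a_i} Y_{i,xi_i+2}^{b_i} as frozen part times Y^gamma.

    Returns (frozen, gamma) where frozen[i] = min(a_i, b_i) is the exponent of
    Y_{i,xi_i} Y_{i,xi_i+2}, and gamma[i] = eps_i * (a_i - b_i) is our reading of
    the coordinate c_i of gamma in the definition of Y^gamma above.
    """
    frozen, gamma = {}, {}
    for i in set(a) | set(b):
        ai, bi = a.get(i, 0), b.get(i, 0)
        eps = 1 if i in I0 else -1
        frozen[i] = min(ai, bi)
        gamma[i] = eps * (ai - bi)
    return frozen, gamma

# Type A_2 with I0 = {1}: the highest l-weight of L(Y_{1,0}^2 Y_{2,3}) has
# a = (2, 0) and b = (0, 1), no frozen part, and gamma = 2*alpha_1 + alpha_2.
frozen, gamma = frozen_and_gamma({1: 2, 2: 0}, {1: 0, 2: 1}, I0={1})
assert frozen == {1: 0, 2: 0} and gamma == {1: 2, 2: 1}
```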
Thus, assuming Conjecture 4.6 (i), the proof of Conjecture 4.6 (ii) reduces to check that (i) F i ⊗ S(α) and F i ⊗ F j are simple for every α ∈ Φ −1 and i, j ∈ I; (ii) if α, β ∈ Φ −1 are compatible, then S(α) ⊗ S(β ) is simple; (iii) for every α ∈ Φ >0 , S(α) is prime. Note that the simple objects F i and S(−α i ) are clearly prime. Indeed, if F i was not prime we could only have F i ∼ = S(α i ) ⊗ S(−α i ), in contradiction with the T -system [S(α i ) ⊗ S(−α i )] = [F i ] + ∏ j :a i j =−1 [S(−α j )]. Type A In this section we assume that g is of type A n . The vertices of the Dynkin diagram are labelled by the interval I = [1, n] in linear order. 10.1 10.2 We will now prove 9.2 (i) and (ii). Note that 9.2 (iii) follows from Corollary 6.10. 10.2.1 We first dispose of (i), which is easy. Indeed, by Example 6.4, χ q (F i ⊗ F j ) 2 is equal to a single dominant monomial hence F i ⊗ F j is simple. Next, we use the fact that χ q (S(α)) 2 is given by the Frenkel-Mukhin algorithm (see Example 6.4 and Corollary 6.8). Thus, if m is the highest l-weight of S(α), we have FM(Y i,ξ i Y i,ξ i +2 m) 2 = Y i,ξ i Y i,ξ i +2 FM(m) 2 = Y i,ξ i Y i,ξ i +2 χ q (L(m)) 2 . Therefore χ q (L(Y i,ξ i Y i,ξ i +2 m)) contains χ q (F i ⊗ S(α)) 2 , and F i ⊗ S(α) is simple. 10.2.2 To describe explicitly the pairs of compatible roots we are going to use the geometric model of [FZ2,§3.5]. The set Φ −1 has cardinality n(n + 1)/2 + n = n(n + 3)/2. We identify the elements of Φ −1 with the diagonals of a regular convex (n + 3)-gon P n+3 as follows. Let 1, . . . , n + 3 be the vertices of P n+3 , labelled counterclockwise. The negative simple roots are identified with the following diagonals: −α k ≡ [i, n + 3 − i] if k = 2i − 1, [i + 1, n + 3 − i] if k = 2i. These diagonals form a "snake", as shown in Figure 1. To identify the remaining diagonals (not belonging to the snake) with positive roots, we associate each α [i, j] with the unique diagonal that crosses the diagonals −α i , −α i+1 , . . . , −α j and does not cross any other diagonal −α k of the snake. Under this identification the roots α and β are compatible if and only if they are represented by two non-crossing diagonals. Hence the clusters are in one-to-one correspondence with the triangulations of P n+3 . Example 10.1 Take n = 3. The identification of the negative simple roots with the snake diagonals of a hexagon is shown in Figure 1. The positive roots are represented by the remaining diagonals as follows: α 1 ≡ [2, 6], α 2 ≡ [1, 4], α 3 ≡ [3, 5], α 1 + α 2 ≡ [4, 6], α 2 + α 3 ≡ [1, 3], α 1 + α 2 + α 3 ≡ [3, 6]. The 14 clusters listed in Example 4.8 correspond to the 14 triangulations of the hexagon. 10.2.3 It follows from the geometric model that all pairs of compatible roots (α, β ) of Φ −1 are of one of the following forms: j,k] (two intervals with a common end); j,l] with i < j k < l and k − j even (two overlapping intervals); j,k] with i < j < k < l and k − j odd (one interval strictly contained into the other). (a) α = −α i , β = −α j ; (b) α = −α i , β ∈ Φ >0 with [β : α i ] = 0;(c) α = α [i, j] , β = α [k,l] with k > j + 1 (two disjoint intervals); (d) α = α [i, j] , β = α [i,l] or α = α [i,k] , β = α [(e) α = α [i,k] , β = α [(f) α = α [i,l] , β = α [ Let us prove that S(α) ⊗ S(β ) is simple in all these cases. (a) This follows from Example 3.3. Indeed, S(−α i ) and S(−α j ) are fundamental modules in a shift of the subcategory C 0 . (b) Write β = α J . 
Let m = Y i,ξ i +1 ∏ j∈J∩I 0 Y j,0 ∏ j∈J∩I 1 Y j,3 be the highest weight monomial of χ q (S(−α i ) ⊗ S(β )). Since by assumption i ∈ J, it is easy to check that FM(m) contains the product χ q (L(Y i,ξ i +1 )) 2 χ q (S(β )) 2 = χ q (S(−α i ) ⊗ S(β )) 2 given by Example 6.4 and Proposition 6.7. Hence, by 3.4, it contains in particular χ q (L(m)) 2 . By Corollary 6.5, it follows that FM(m) = χ q (L(m)). Thus, χ q (S(−α i ) ⊗ S(β )) = χ q (L(m)), and S(−α i ) ⊗ S(β ) is irreducible. (c) Same reasoning as (b). (d) We assume that i j l and argue by induction on s = l − i. If s = 0, we have to check that S(α i ) ⊗ S(α i ) is simple. This is clear, since if we choose ξ i = 1 then χ q (S(α i )) 2 = Y i,3 and its square contains a single dominant monomial. Suppose the claim is proved when l − i = s, and take l − i = s + 1. Let m [i, j] and m [i,l] denote the highest weight monomials of S(α [i, j] ) and S(α [i,l] ). Choose again ξ i = 1. Then, by Proposition 6.7, the truncated q-characters of S(α [i, j] ) and S(α [i,l] ) do not contain any A −1 k,ξ k +1 with k i. So we have χ q (S(α [i, j] )) 2 = ϕ [i+1, j+1] (m [i, j] ) 2 , χ q (S(α [i,l] )) 2 = ϕ [i+1,l+1] (m [i,l] ) 2 . Hence χ q (S(α [i, j] ) ⊗ S(α [i,l] )) 2 = ϕ [i+1,l+1] (m [i, j] m [i,l] ) 2 . So by using the induction hypothesis and 5.4, we get χ q (S(α [i, j] ) ⊗ S(α [i,l] )) 2 χ q (L(m [i, j] m [i,l] )) 2 . This shows that S(α [i, j] ) ⊗ S(α [i,l] ) ∼ = L(m [i, j] m [i,l] ) is simple. The case of S(α [i, j] ) ⊗ S(α [k, j] ) with 1 k j is similar. (e) (f) Let m = m [i,k] m [ j,l] = m [i,l] m [ j,k] . We want to show that χ q (L(m)) 2 = χ q (L(m [i,k] )) 2 χ q (L(m [ j,l] )) 2 if k − j is odd, χ q (L(m [i,l] )) 2 χ q (L(m [ j,k] )) 2 if k − j is even. To do this, it is enough to show that all the monomials in the right-hand side occur in the lefthand side with the same multiplicity. We argue by induction on l − i 2. For l − i = 2 we are necessarily in the first case with j = k = i + 1 = l − 1. Let us choose ξ i = 1. Then ξ l = 1, and by Proposition 6.6 χ q (L(m [i,k] )) 2 = ϕ [i,l] (m [i,k] ) 2 , χ q (L(m [ j,l] )) 2 = ϕ [i,l] (m [i,k] ) 2 . Thus we can assume for simplicity of notation that I = [i, l] = [1, 3]. Writing for short v m = A −1 m,ξ m +1 , we have χ q (L(m [1,2] )) 2 χ q (L(m [2,3] )) 2 = m(1 + v 2 + v 2 v 3 )(1 + v 2 + v 1 v 2 ). If a monomial in the right-hand side does not contain v 1 , then it belongs to ϕ [2,3] (m [1,2] ) 2 ϕ [2,3] (m [2,3] ) 2 = ϕ [2,3] (m) 2 χ q (L(m)) (the equality follows from (d)). For the same reason, if a monomial does not contain v 3 , it belongs to χ q (L(m)). The only monomial in the right-hand side which contains both v 1 and v 3 is m ′ = mv 1 v 2 2 v 3 . Now m ′′ := mv 2 2 v 3 = Y 2 1,1 Y 1,3 Y −1 2,2 Y 3,1 belongs to χ q (L(m)) by what we have just said, and it is 1-dominant. We have ϕ 1 (m ′′ ) 2 = m ′′ + m ′ , thus, by 5.3, m ′ also belongs to χ q (L(m)). This proves (37) when l − i = 2. Assume now that (37) holds when l − i = s and take l − i = s + 1. Choose ξ i = 1. Using again Proposition 6.6, we can also assume without loss of generality that I = [i, l] = [1, s+ 1]. Arguing as in the case l − i = 2 and using (c) or (d) or the induction hypothesis, we see that every monomial m ′ occuring in the right-hand side of (37) occurs in χ q (L(m)) with the same multiplicity if m ′ m −1 does not contain v 1 v 2 · · · v s+1 . So we only need to consider the monomials m ′ = mv c 1 1 · · · v c s+1 s+1 with c k > 0 for every k ∈ I. 
For such a monomial m ′ , it can be checked using Proposition 6.7 that there exists k with c k = 2, and that the smallest such k belongs to I 0 . Let us denote it by k min . Then Proposition 6.7 shows that m ′′ := m ′ /v k min −1 appears in the right-hand side of (37), and since m ′′ does not contain v k min −1 , by what we said above, m ′′ also appears in χ q (L(m)) with the same multiplicity. Now m ′′ is (k min − 1)-dominant and ϕ k min −1 (m ′′ ) = m ′′ + m ′ is contained in χ q (L(m)) by 5.3. Moreover, the multiplicities of m ′ and m ′′ in ϕ k min −1 (m ′′ ) are the same. Hence, their multiplicities are also the same on both sides of (37). This finishes the proof of (e) and (f). This finishes the proof of 9.2 (ii) in type A. Hence Conjecture 4.6 (ii) is proved in type A. 10.3 The truth of Conjecture 4.6 in type A has the following interesting consequence, which will be used to validate Conjecture 4.6 (i) in type D. Let γ = ∑ i c i α i , with c i 0. We define Y γ ∈ M + as in 9.2. Write τ − (γ) = δ = ∑ i d i α i . Corollary 10.2 If 0 d i 2 for every i ∈ I, the truncated q-character of L(Y γ ) is equal to χ q (L(Y γ )) 2 = Y γ F δ A −1 1,ξ 1 +1 , . . . , A −1 n,ξ n +1 , where the F-polynomial F δ is given by the explicit formula (35). Proof -Let γ = ∑ α∈Φ >0 n α α be the cluster expansion of γ. By 10.2, we have χ q (L(Y γ )) 2 = ∏ α∈Φ >0 χ q (S(α)) n α 2 = Y γ ∏ α∈Φ >0 F τ − (α) (A −1 i,ξ i +1 ) n α . Since τ − is linear on ⊕ i Nα i , we have τ − (γ) = ∑ α∈Φ >0 n α τ − (α) = δ , and this is the cluster ex- . . . , v n ) n α , the proof of Proposition 7.5 given in [FZ2] shows that, since δ is 2-restricted, F δ can still be calculated by formula (35). Indeed, the two main steps (Lemmas 2.11 and 2.12) are proved for a 2-restricted vector which is not necessarily a root. pansion of δ . Now if we define F δ (v 1 , . . . , v n ) := ∏ α∈Φ >0 F τ − (α) (v 1 ,γ = α 1 + 2α 2 + α 3 . Then τ − (γ) = γ. Writing v i = A −1 i,ξ i +1 , we have χ q (L(Y 1,3 Y 2 2,0 Y 3,3 )) 2 = Y 1,3 Y 2 2,0 Y 3,3 (1 + 2v 2 + v 2 2 + v 1 v 2 + v 2 v 3 + v 1 v 2 2 + v 2 2 v 3 + v 1 v 2 2 v 3 ), where the monomials of the right-hand side are given by the combinatorial rule of Proposition 7.5. Type D In this section we take g of type D n , and we label the Dynkin diagram as in [B]. 11.1 Let β ∈ Φ >0 and let α = τ − (β ). We want to prove Theorem 11.1 The normalized truncated q-character of S(β ) is equal to χ q (S(β )) 2 = F α (v 1 , . . . , v n ), where we write for short v i = A −1 i,ξ i +1 (i ∈ I), and the F-polynomial F α is given by the combinatorial formula of Proposition 7.5. We assume that the trivalent node n − 2 belongs to I 0 (by 3.9, this is no loss of generality). We first establish some formulas in type D 4 . Lemma 11.2 Let g be of type D 4 . We have χ q (L(Y 1,3 Y 2,0 )) 2 = Y 1,3 Y 2,0 (1 + v 2 (1 + v 3 )(1 + v 4 )) , χ q (L(Y 2,0 Y 3,3 Y 4,3 )) 2 = Y 2,0 Y 3,3 Y 4,3 (1 + v 2 (1 + v 1 )) , χ q (L(Y 1,3 Y 2 2,0 Y 3,3 Y 4,3 )) 2 = Y 1,3 Y 2 2,0 Y 3,3 Y 4,3 (1 + v 2 (2 + v 1 + v 3 + v 4 ) + v 2 2 (1 + v 1 )(1 + v 3 )(1 + v 4 ) . Proof -The first two formulas concern multiplicity-free roots, and thus follow from Proposition 6.7. For the third one, taking J = {1, 2} and setting m = Y 1,0 Y 2 2,3 Y 3,0 Y 4,0 , we have by Example 6.3 (or Corollary 10.2) ϕ J (m) 2 = m 1 + v 2 (2 + v 1 ) + v 2 2 (1 + v 1 ) , and by 5.4 we know that ϕ J (m) χ q (L(m)). Replacing J by {2, 3} and by {2, 4} we get that m 1 + v 2 (2 + v 1 + v 3 + v 4 ) + v 2 2 (1 + v 1 + v 3 + v 4 ) χ q (L(m)) 2 . 
Now m 1 := mv 2 2 v 1 = Y 1,1 Y −1 2,2 Y 2 3,1 Y 3,3 Y 2 4,1 Y 4,3 is J-dominant for J = {3, 4}, and ϕ J (m 1 ) 2 = m 1 (1 + v 3 )(1 + v 4 ), so, by 5.4, we see that m 1 (1 + v 3 )(1 + v 4 ) χ q (L(m)). Arguing similarly with m 2 := mv 2 2 v 3 and m 3 := mv 2 2 v 4 we obtain that m 1 + v 2 (2 + v 1 + v 3 + v 4 ) + v 2 2 (1 + v 1 )(1 + v 3 )(1 + v 4 ) χ q (L(m)) 2 .(38) Conversely, we have χ q (L(m)) χ q (L(Y 2,0 ))χ q (L(Y 1,3 Y 2,0 Y 3,3 Y 4,3 ). Using Proposition 6.7, we get χ q (L(m)) 2 m (1 + v 2 (1 + v 1 )(1 + v 3 )(1 + v 4 )) (1 + v 2 ). This upper bound is equal to the lower bound of (38) plus m (v 2 v 1 v 3 + v 2 v 1 v 4 + v 2 v 3 v 4 + v 2 v 1 v 3 v 4 ) . We also have χ q (L(m)) χ q (L(Y 1,3 Y 2,0 ))χ q (Y 2,0 Y 3,3 Y 4,3 ). Using the first two formulas, we get χ q (L(m)) 2 m (1 + v 2 (1 + v 3 )(1 + v 4 )) (1 + v 2 (1 + v 1 )) . This second upper bound does not contain the monomials mv 2 v 1 v 3 and mv 2 v 1 v 3 v 4 . We can rule out the two remaining monomials by using the three-fold symmetry 1 ↔ 3 ↔ 4, and we obtain an upper bound equal to the lower bound. 2 Proof of Th. 11.1 -If β is a multiplicity-free positive root, the result follows from Theorem 7.8. So we assume that β = ∑ i b i α i has some multiplicity. In view of the list of roots for D n , this implies that b n−2 = 2, and b n−1 = b n = 1. Since n − 2 ∈ I 0 , we also have α = ∑ i a i α i with a n−2 = 2 and a n−1 = a n = 1. Consider the subgraph ∆ ′ of the Dynkin diagram supported on [1, n − 1], of type A n−1 . Set β ′ = β − α n and α ′ = α − α n . Then α ′ = τ ′ − (β ′ ), where τ ′ − is defined like τ − , but for ∆ ′ . Let Y β be the highest l-weight of S(β ). By Corollary 10.2, ϕ [1,n−1] (Y β ) = Y β F α ′ (v 1 , . . . , v n−1 ), where F α ′ is the F-polynomial for ∆ ′ , given by the explicit formula (35). Since n ∈ I 1 , this formula shows that F α ′ (v 1 , . . . , v n−1 ) = F α (v 1 , . . . , v n−1 , 0). Hence the specialization at v n = 0 of the polynomial χ q (S(β )) 2 is equal to F α (v 1 , . . . , v n−1 , 0). Similarly, the specialization at v n−1 = 0 of χ q (S(β )) 2 is equal to F α (v 1 , . . . , v n−2 , 0, v n ). Thus we are reduced to prove that the monomials of χ q (S(β )) 2 and F α (v 1 , . . . , v n ) containing both v n−1 and v n are the same and have the same coefficients. We first show that such a monomial is of the form mv 2 n−2 v n−1 v n where m is a monomial in v 1 , . . . , v n−3 . For the polynomial F α (v 1 , . . . , v n ) this follows easily from the definition of an αacceptable vector and the values of a n−2 , a n−1 , a n . For χ q (S(β )) 2 , we have χ q (S(β )) 2 χ q (S(β − 2α n−2 − α n−1 − α n )) 2 χ q (S(2α n−2 + α n−1 + α n )) 2 , where, by Proposition 6.6, χ q (S(β − 2α n−2 − α n−1 − α n )) 2 does not contain v n−2 , v n−1 , v n . Moreover, 2α n−2 + α n−1 + α n is supported on a root system of type A 3 , so by Example 10.3 and Proposition 6.6 all the monomials of χ q (S(2α n−2 + α n−1 + α n )) 2 containing v n−1 and v n are of the form m ′ v 2 n−2 v n−1 v n where m ′ is a monomial in v n−3 . We now note that if mv 2 n−2 v n−1 v n appears in F α (v 1 , . . . , v n ) (where m is a monomial in the variables v 1 , . . . , v n−3 ), then mv 2 n−2 also occurs, and with the same multiplicity. This follows again from the definition of an α-acceptable vector γ and of the integer e(γ, α). We have seen that mv 2 n−2 occurs in χ q (S(β )) 2 and F α (v 1 , . . . , v n ) with the same multiplicity. 
Thus all we have to do is to show that mv 2 n−2 and mv 2 n−2 v n−1 v n occur in χ q (S(β )) 2 with the same multiplicity. Let us denote these multiplicities by a and b, respectively. Since Y β mv 2 n−2 is {n − 1, n}-dominant, χ q (S(β )) 2 contains mv 2 n−2 (1 + v n−1 )(1 + v n ) with multiplicity at least a by 5.4, so a b. Conversely, let β ′′ = β − α n − α n−1 . The multiplicity a of mv 2 n−2 in χ q (S(β )) 2 is equal to its multiplicity in χ q (S(β ′′ )) 2 . This is a multiplicity in type A n−2 , so using Corollary 10.2, we can check that it coincides with the multiplicity of mv 2 n−2 in the product χ q (S(β ′′ − 2α n−2 − α n−3 )) 2 χ q (S(2α n−2 + α n−3 )) 2 or equivalently in the product χ q (S(β ′′ − 2α n−2 − α n−3 )) 2 χ q (S(2α n−2 + α n−3 + α n−1 + α n )) 2 . Using the third formula of Lemma 11.2 and the fact that the first factor contains no variable v n−2 , v n−1 , or v n , we obtain that in this product the multiplicity a of mv 2 n−2 is equal to the multiplicity of mv 2 n−2 v n−1 v n . But, by 3.4, this is greater or equal to b. Hence a b. This concludes the proof of Theorem 11.1. 2 Hence (34) is verified, and this proves Conjecture 4.6 (i) in type D n . 11.2 In this section we take g of type D 4 and we prove that Conjecture 4.6 (ii) is verified. For this we need to prove 9.2 (i) (ii) and (iii). 11.2.1 The proof that F i ⊗ F j is simple is the same as in type A n (cf. 10.2.1). We then consider F i ⊗ S(α) for α ∈ Φ −1 . If α = −α i or if α is a multiplicity-free positive root, we can repeat the argument of 10.2.1. If α = α 1 + 2α 2 + α 3 + α 4 , we choose I 1 = {2}, so that τ − (α) = α 1 + α 2 + α 3 + α 4 . By Theorem 11.1, we have χ q (S(α)) 2 = 1 + v 1 + v 3 + v 4 + v 1 v 3 + v 1 v 4 + v 3 v 4 + v 1 v 3 v 4 + v 1 v 2 v 3 v 4 . It is easy to check that this is the result given by the Frenkel-Mukhin algorithm, so we can argue again as in 10.2.1. This proves 9.2 (i). 11.2.2 Let α, β ∈ Φ −1 be two compatible roots. We want to show that S(α)⊗ S(β ) is simple. If α or β is negative, we can repeat the argument of 10.2.3 (a) (b). So let us suppose that α, β ∈ Φ >0 . If the union of the supports of α and β is strictly smaller than I, we can assume by symmetry that it is contained in {1, 2, 3}. Take I 1 = {2}. By Proposition 6.6, we then have χ q (S(α)) 2 = ϕ {1,2,3} (Y α ) 2 , χ q (S(β )) 2 = ϕ {1,2,3} (Y β ) 2 , that is, our q-characters are in fact of type A 3 . On the other hand, α and β are also compatible as roots of type A 3 , hence the equality χ q (S(α)) 2 χ q (S(β )) 2 = χ q (S(α) ⊗ S(β )) 2 follows from 10.2.3. We are thus reduced to those pairs of compatible positive roots (α, β ) whose union of supports is equal to I. By [FZ2,Prop. 3.16], these are, up to the 3-fold symmetry 1 ↔ 3 ↔ 4, (a) α = α 1 , β = α 1 + α 2 + α 3 + α 4 ; (b) α = α 1 + α 2 , β = α 1 + 2α 2 + α 3 + α 4 ; (c) α = α 1 + α 2 + α 3 , β = α 2 + α 3 + α 4 ; (d) α = α 1 + α 2 + α 3 , β = α 1 + α 2 + α 3 + α 4 ; (e) α = α 1 + α 2 + α 3 , β = α 1 + 2α 2 + α 3 + α 4 ; (f) α = β = α 1 + α 2 + α 3 + α 4 ; (g) α = β = α 1 + 2α 2 + α 3 + α 4 . (a) Take I 0 = {2}. Then χ q (S(α)) 2 = Y 1,3 and, since τ − (β ) = α 2 we have χ q (S(β )) 2 = Y 1,3 Y 2,0 Y 3,1 Y 4,1 (1 + v 2 ), hence χ q (S(α) ⊗ S(β )) 2 contains a unique dominant monomial and S(α) ⊗ S(β ) is simple. (b) Take I 1 = {2}. Then, by Theorem 11.1, χ q (S(α) ⊗ S(β )) 2 = Y α Y β (1 + v 1 )(1 + v 1 + v 3 + v 4 + v 1 v 3 + v 1 v 4 + v 3 v 4 + v 1 v 3 v 4 + v 1 v 2 v 3 v 4 ). 
The only dominant monomials in this product are Y α Y β and Y α Y β v 1 v 2 v 3 v 4 , hence it is enough to show that Y α Y β v 1 v 2 v 3 v 4 occurs in χ q (L(Y α Y β )). Now Y α Y β = Y 2 1,0 Y 3 2,3 Y 3,0 Y 4,0 , and clearly m := Y α Y β v 4 = Y 2 1,0 Y 2,1 Y 3 2,3 Y 3,0 Y −1 4,2 occurs in χ q (L(Y α Y β )). Since m does not contain v 1 , v 2 , v 3 , we know by 5.4 that ϕ {1,2,3} (m) is contained in χ q (L(Y α Y β )). To calculate ϕ {1,2,3} (m), we write m = Y γ (Y 2,1 Y 2,3 )Y −1 4,2 where γ = 2α 1 + 2α 2 + α 3 is in the root lattice of type A 3 and has the cluster expansion γ = (α 1 + α 2 ) + (α 1 + α 2 + α 3 ). It then follows from Section 10 that ϕ {1,2,3} (m) = m(1 + v 1 )(1 + v 1 + v 3 + v 1 v 3 + v 1 v 2 v 3 ), hence mv 1 v 2 v 3 = Y α Y β v 1 v 2 v 3 v 4 occurs in χ q (L(Y α Y β )). (c) Take I 0 = {2}. Then τ − (α) = α 2 + α 4 and τ − (β ) = α 1 + α 2 , so χ q (S(α) ⊗ S(β )) 2 = Y α Y β (1 + v 2 + v 2 v 4 )(1 + v 2 + v 1 v 2 ). But by Section 10, this is equal to ϕ {1,2,4} (Y α Y β ), which is contained in χ q (L(Y α Y β )). (d) Take I 0 = {2}. Then τ − (α) = α 2 + α 4 and τ − (β ) = α 2 , so one can argue as in (c). (e) Take I 1 = {2}. Then, by Theorem 11.1, χ q (S(α) ⊗ S(β )) 2 = Y α Y β (1 + v 1 + v 3 + v 1 v 3 + v 1 v 2 v 3 ) ×(1 + v 1 + v 3 + v 4 + v 1 v 3 + v 1 v 4 + v 3 v 4 + v 1 v 3 v 4 + v 1 v 2 v 3 v 4 ). The only dominant monomials other than Y α Y β are Y α Y β v 1 v 2 v 3 , Y α Y β v 1 v 2 v 3 v 4 which occurs with coefficient 2, and Y α Y β v 2 1 v 2 2 v 2 3 v 4 , hence it is enough to show that they occur in χ q (L(Y α Y β )) . For the first one, which does not depend on v 4 , one can argue as in (c) or (d). Now Y α Y β = Y 2 1,0 Y 3 2,3 Y 2 3,0 Y 4,0 , and clearly m := Y α Y β v 4 = Y 2 1,0 Y 2,1 Y 3 2,3 Y 2 3,0 Y −1 4,2 occurs in χ q (L(Y α Y β )). Since m does not contain v 1 , v 2 , v 3 , we know by 5.4 that ϕ {1,2,3} (m) is contained in χ q (L(Y α Y β )). To calculate ϕ {1,2,3} (m), we write m = Y γ (Y 2,1 Y 2,3 )Y −1 4,2 where γ = 2α 1 +2α 2 +2α 3 is in the root lattice of type A 3 and has the cluster expansion γ = 2(α 1 + α 2 + α 3 ). It then follows from Section 10 that ϕ {1,2,3} (m) = m(1 + v 1 + v 3 + v 1 v 3 + v 1 v 2 v 3 ) 2 , hence mv 1 v 2 v 3 = Y α Y β v 1 v 2 v 3 v 4 and mv 2 1 v 2 2 v 2 3 = Y α Y β v 2 1 v 2 2 v 2 3 v 4 occur in χ q (L(Y α Y β )), the first one with coefficient 2. (f) Take I 0 = {2}. Then χ q (S(α)) 2 = Y α (1 + v 2 ) , and its square has only one dominant monomial, namely (Y α ) 2 . (g) Take I 1 = {2}. Then, by Theorem 11.1, χ q (S(α) ⊗2 ) 2 = (Y α ) 2 (1 + v 1 + v 3 + v 4 + v 1 v 3 + v 1 v 4 + v 3 v 4 + v 1 v 3 v 4 + v 1 v 2 v 3 v 4 ) 2 . The only dominant monomials other than (Y α ) 2 are (Y α ) 2 v 1 v 2 v 3 v 4 which occurs with coefficient 2, and (Y α ) 2 v 2 1 v 2 2 v 2 3 v 2 4 , hence it is enough to show that they occur in χ q (L((Y α ) 2 )). Now m 1 := (Y α ) 2 v 4 and m 2 := (Y α ) 2 v 2 4 both occur in χ q (L((Y α ) 2 )), the first one with coefficient 2, and both are {1, 2, 3}-dominant. Considering ϕ {1,2,3} (m 1 ) and ϕ {1,2,3} (m 2 ) we can conclude as in (e). This finishes the proof of 9.2 (ii). 11.2.3 Finally 9.2 (iii) follows from Corollary 6.10 for all multiplicity-free roots. It remains to check it for the longest root α = α 1 + 2α 2 + α 3 + α 4 . This can be done easily using our explicit formulas for the truncated q-characters. For example we see from Lemma 11.2 that the monomial (S(α)). All other factorizations of S(α) can be ruled out in a similar way, and we omit the details. 
Y 1,3 Y 2 2,0 Y 3,3 Y 4,3 v 2 v 3 v 4 occurs in χ q (L(Y 1,3 Y 2,0 ))χ q (L(Y 2,0 Y 3,3 Y 4,3 )) but not in χ q This concludes the proof of Conjecture 4.6 in type D 4 . 2 Applications In this section we present some interesting consequences of our results concerning C 1 . 12.1 By construction, the cluster variables of a cluster algebra satisfy some algebraic identities coming from the mutation procedure. When we restrict to mutations on the bipartite belt [FZ5] these identities are similar to a T -system. In the case ℓ = 1 this T -system is periodic and involves the classes of all the cluster simple objects of C 1 , as we shall now see. Define τ = τ + τ − (see 7.1). For every i ∈ I, define a sequence γ i ( j) ( j ∈ Z) of elements of Φ −1 by γ i (2 j) = τ j (−α i );(39) for odd j, γ i ( j) is defined via the parity condition γ i ( j) = γ i ( j + 1) if ε i = (−1) j+1 .(40) It follows from [FZ2] that every element of Φ −1 is of the form γ i ( j) for some i ∈ I and some j ∈ [0, h + 2]. For every i ∈ I, define a sequence β i ( j) ( j ∈ Z) of elements of Φ −1 by the initial condition β i (0) = α i ,(41) and the recursion β i ( j + 1) = τ (−1) j+1 (β i ( j)), ( j ∈ Z).(42) Define the following monomials in the classes of the frozen simple objects F i : p + i ( j) = ∏ k∈I [F k ] max(0,[β k ( j):α i ]) , p − i ( j) = ∏ k∈I [F k ] max(0,−[β k ( j):α i ]) , (i ∈ I, j ∈ Z).(43) The next result follows from Conjecture 4.6 (i), hence it is now proved if g is of type A n or D n . Theorem 12.1 The cluster simple objects S(α) (α ∈ Φ −1 ) of C 1 satisfy the following system of equations in the Grothendieck ring R 1 [S(γ i ( j + 1))] [S(γ i ( j − 1))] = p + i ( j) + p − i ( j) ∏ k =i [S(γ k ( j))] −a ik , (i ∈ I, j ∈ Z). Proof -Set p i ( j) = p + i ( j) p − i ( j) = ∏ k∈I [F k ] [β k ( j):α i ] ,(44) a Laurent monomial in the [F i ]. We will first show that p i ( j + 1)p i ( j − 1) = ∏ k =i (p + k ( j)) −a ik .(45) To do that, set q i ( j) = ∏ k =i (p + k ( j)) −a ik p i ( j + 1)p i ( j − 1) . For i ∈ I, define following [FZ2] a piecewise-linear automorphism σ i of Q by [σ i (γ) : α j ] =        [γ : α j ] if j = i, −[γ : α i ] − ∑ k =i a ik max(0, [γ : α k ]) if j = i.(46) Then τ ε = ∏ ε i =ε σ i . Using the definition of p ± i ( j) one can see that the exponent of [F u ] in q i ( j) is equal to [σ i (β u ( j)) + β u ( j) − β u ( j + 1) − β u ( j − 1) : α i ] = [σ i (β u ( j)) + β u ( j) − τ (−1) j+1 (β u ( j)) − τ (−1) j (β u ( j)) : α i ]. Suppose that ε i = (−1) j+1 . Then σ i appears once in τ (−1) j+1 and does not appear in τ (−1) j . It follows that [τ (−1) j+1 (β u ( j)) + τ (−1) j (β u ( j)) : α i ] = [σ i (β u ( j)) + β u ( j) : α i ], hence, the exponent of [F u ] in q i ( j) is 0. The case ε i = (−1) j is identical. Thus, q i ( j) = 1 and (45) is proved. Set y i ( j) = 1/p i ( j). Then, using the tropical semifield structure on the set of Laurent monomials in the [F i ] (see 7.2), one has 1 ⊕ y i ( j) = 1/p + i ( j), hence p + i ( j) = 1 1 ⊕ y i ( j) , p − i ( j) = y i ( j) 1 ⊕ y i ( j) . With this new notation, (45) becomes y i ( j + 1)y i ( j − 1) = ∏ k =i (1 ⊕ y k ( j)) −a ik ,(47) and the relation of Theorem 12.1 takes the form [S(γ i ( j + 1))] [S(γ i ( j − 1))] = 1 + y i ( j) ∏ k =i [S(γ k ( j))] −a ik 1 ⊕ y i ( j) .(48) Equations (47) and (48) coincide with formulas (8.11) and (8.12) of [FZ5], which proves the theorem. 2 Example 12.2 We take g of type A 3 and choose I 0 = {1, 3}, I 1 = {2}. Hence τ + = σ 1 σ 3 , τ − = σ 2 , and τ = σ 1 σ 3 σ 2 . 
We have γ 1 (0) = −α 1 , γ 2 (0) = −α 2 , γ 3 (0) = −α 3 , γ 1 (1) = α 1 , γ 2 (1) = −α 2 , γ 3 (1) = α 3 , γ 1 (2) = α 1 , γ 2 (2) = α 1 + α 2 + α 3 , γ 3 (2) = α 3 , γ 1 (3) = α 2 + α 3 , γ 2 (3) = α 1 + α 2 + α 3 , γ 3 (3) = α 1 + α 2 , γ 1 (4) = α 2 + α 3 , γ 2 (4) = α 2 , γ 3 (4) = α 1 + α 2 , γ 1 (5) = −α 3 , γ 2 (5) = α 2 , γ 3 (5) = −α 1 , γ 1 (6) = −α 3 , γ 2 (6) = −α 2 , γ 3 (6) = −α 1 , and β 1 (0) = α 1 , β 2 (0) = α 2 , β 3 (0) = α 3 , β 1 (1) = α 1 + α 2 , β 2 (1) = −α 2 , β 3 (1) = α 2 + α 3 , β 1 (2) = α 2 + α 3 , β 2 (2) = −α 2 , β 3 (2) = α 1 + α 2 , β 1 (3) = α 3 , β 2 (3) = α 2 , β 3 (3) = α 1 , β 1 (4) = −α 3 , β 2 (4) = α 1 + α 2 + α 3 , β 3 (4) = −α 1 , β 1 (5) = −α 3 , β 2 (5) = α 1 + α 2 + α 3 , β 3 (5) = −α 1 , β 1 (6) = α 3 , β 2 (6) = α 2 , β 3 (6) = α 1 . The formulas of Theorem 12.1 read for i = 1, 3: [S(α 1 )] [S(−α 1 )] = [F 1 ] + [S(−α 2 )], [S(α 2 + α 3 )] [S(α 1 )] = [F 3 ] + [S(α 1 + α 2 + α 3 )], [S(−α 3 )] [S(α 2 + α 3 )] = [F 2 ] + [F 3 ] [S(α 2 )], [S(α 3 )] [S(−α 3 )] = [F 3 ] + [S(−α 2 )], [S(α 1 + α 2 )] [S(α 3 )] = [F 1 ] + [S(α 1 + α 2 + α 3 )], [S(−α 1 )] [S(α 1 + α 2 )] = [F 2 ] + [F 1 ] [S(α 2 )]. and for i = 2: [S(α 1 + α 2 + α 3 )] [S(−α 2 )] = [F 1 ][F 3 ] + [F 2 ] [S(α 1 )] [S(α 3 )], [S(α 2 )] [S(α 1 + α 2 + α 3 )] = [F 2 ] + [S(α 1 + α 2 )] [S(α 2 + α 3 )], [S(−α 2 )] [S(α 2 )] = [F 2 ] + [S(−α 1 )] [S(−α 3 )]. Remark 12.3 In type E n , using Proposition 6.7, we can still prove identities involving only multiplicity-free roots. For example, one can show that for every i ∈ I, [S(α i )][S(−α i )] = [F i ] + ∏ j =i [S(−α j )] −a i j , [S(α i − ∑ j =i a i j α j )][S(−α i )] = ∏ j =i [F j ] −a i j + [F i ] ∏ j =i [S(α j )] −a i j . The first formula is a classical T -system, but not the second one. 12.2 If g is of type A n , we have the following well-known duality for the characters of the finitedimensional irreducible g-modules. If λ = (λ 1 , . . . , λ n ) and µ = (µ 1 , . . . , µ n ) are two dominant weights, identified in the standard way with partitions in at most n parts, there holds dimV (λ ) µ = [V (µ 1 ) ⊗ · · · ⊗V (µ n ) : V (λ )], where V (λ ) stands for the irreducible g-module with highest weight λ . In plain words, the weight multiplicities of V (λ ) coincide with the multiplicities of V (λ ) as a direct summand of certain tensor products. This is sometimes called Kostka duality and is specific to type A. We find it interesting to note that Eq. (34) yields a similar duality for the truncated q-characters of the cluster simple objects S(β ) for any g. Indeed assume that (34) holds, namely that χ q (S(β )) 2 = F τ − (β ) (v 1 , . . . , v n ), (v i = A −1 i,ξ i +1 ), (this is proved in type A and D and for all multiplicity-free roots in type E). Then writing for short α = τ − (β ) = ∑ i a i α i , v γ = ∏ i∈I v c i i for γ = ∑ i c i α i ∈ Q, and F α (v 1 , . . . , v n ) = ∑ γ n β ,γ v γ , we have that n β ,γ is the multiplicity of the l-weight Y β v γ in S(β ). On the other hand, define T (α) := i∈I S(−ε i α i ) ⊗a i . Note that χ q (T (α)) 2 = ∏ i Y a i i,ξ i +2 is reduced to a single monomial, so T (α) is simple. (From the cluster algebra point of view, (x[−ε i α i ]; i ∈ I) is a cluster of A .) Put d i =        − ∑ j =i c j a i j if i ∈ I 0 , ∑ j =i (c j − a j )a i j if i ∈ I 1 , , e i =        a i − c i if i ∈ I 0 , −c i − ∑ j =i c j a i j if i ∈ I 1 , . If n β ,γ = 0 then d i and e i are nonnegative and we can consider the simple module U (γ) := i∈I S(−ε i α i ) ⊗d i ⊗ F ⊗e i i . Proposition 12.4 Assume that Eq. 
(34) holds for S(β ). Then the multiplicity of U (γ) as a composition factor of the tensor product S(β ) ⊗ T (α) is equal to the l-weight multiplicity n β ,γ . Proof -This is a direct calculation using Eq. (27) However, our results have geometric consequences. Indeed, by work of Fu and Keller [FK], the F-polynomials have nice geometric descriptions in terms of quiver grassmannians. This goes as follows. Let C be the oriented graph obtained from the Dynkin diagram of g by deciding that the vertices in I 1 are sources and those in I 0 are sinks. So C is a Dynkin quiver, and we can associate to every positive root α the unique (up to isomorphism) indecomposable representation M[α] of C over C with dimension vector α. Regarding an element γ = ∑ i c i α i of the root lattice with nonnegative coordinates c i as a dimension vector for C, we can consider for every representation M of C the quiver grassmannian Gr γ (M) := {N | N is a subrepresentation of M with dimension γ}. This is a closed subset of the ordinary grassmannian of subspaces of dimension ∑ i c i of the complex vector space M. So in particular, Gr γ (M) is a projective variety. Denote by χ(Gr γ (M)) its topological Euler characteristic. Then we have the following formula, inspired from a similar formula of Caldero and Chapoton for cluster expansions of cluster variables [CC]. Theorem 12.5 [FK,Th. 6.5] For α ∈ Φ >0 we have F α (v 1 , . . . , v n ) = ∑ γ χ(Gr γ (M[α])) v γ . This yields immediately Theorem 12.6 If Eq. (34) holds for β ∈ Φ −1 , then χ q (S(β )) 2 = Y β ∑ γ χ(Gr γ (M[τ − (β )])) v γ , (v i = A −1 i,ξ i +1 ).(49) Example 12.7 Take g of type D 4 and choose I 0 = {2}, so that C is the quiver of type D 4 with its three arrows pointing to the trivalent node 2. Let β = α 1 + 2α 2 + α 3 + α 4 be the highest root. We have τ − (β ) = β . The representation M[β ] of C is of dimension 5. There are thirteen non-empty quiver grassmannians corresponding to the dimension vectors (0, 0, 0, 0), (0, 1, 0, 0), (0, 2, 0, 0), (1, 1, 0, 0), (0, 1, 1, 0), (0, 1, 0, 1), (1, 2, 0, 0), (0, 2, 1, 0), (0, 2, 0, 1), (1, 2, 1, 0), (1, 2, 0, 1), (0, 2, 1, 1), (1, 2, 1, 1). The variety Gr (0,1,0,0) (M[β ]) is a projective line, hence its Euler characteristic is equal to 2. The twelve other grassmannians are reduced to a point. Therefore we obtain that χ q (S(β )) 2 = Y 1,3 Y 2 2,0 Y 3,3 Y 4,3 1 + 2v 2 + v 2 2 + v 1 v 2 + v 2 v 3 + v 2 v 4 + v 1 v 2 2 + v 2 2 v 3 + v 2 2 v 4 + v 1 v 2 2 v 3 + v 1 v 2 2 v 4 + v 2 2 v 3 v 4 + v 1 v 2 2 v 3 v 4 , in agreement with Lemma 11.2. Note that if moreover Conjecture 4.6 (ii) holds, then we can write any simple module L(m) in C 1 as a tensor product of cluster simple objects. Taking into account the additivity properties of the Euler characteristics and the results of [CK], this gives for χ q (L(m)) 2 a formula similar to (49), in which the indecomposable representation M[τ − (β )] is replaced by a generic representation of C (or equivalently a representation without self-extension). Example 12.8 Take g of type A 2 and choose I 0 = {1}, so that C is the quiver 1 ←− 2. Consider the simple module S = L(Y 2 1,0 Y 2,3 ). We have seen that S ∼ = L(Y 1,0 Y 2,3 ) ⊗ L(Y 1,0 ) . This corresponds to the fact that the generic representation of C of dimension vector 2α 1 + α 2 is M = (C id ←− C) ⊕ (C 0 ←− 0). There are five non-empty quiver grassmannians for M corresponding to the dimension vectors (0, 0), (1, 0), (2, 0), (1, 1), (2, 1). 
The variety Gr (1,0) (M) is a projective line, hence its Euler characteristic is equal to 2. The four other grassmannians are reduced to a point. Therefore we obtain that χ q (S) 2 = Y 2 1,0 Y 2,3 1 + 2v 1 + v 2 1 + v 1 v 2 + v 2 1 v 2 , in agreement with Example 6.3. Theorem 12.6 is very similar to a formula of Nakajima [N1, §13] for the q-character of a standard module. Indeed, as shown by Lusztig [Lu1], the lagrangian quiver varieties used in Nakajima's character formula are isomorphic to grassmannians of submodules of a projective module over a preprojective algebra. There are however two important differences. In our case the geometric formula gives only the truncated q-character (but this is enough to determine the full q-character of an object of C 1 ). More importantly, Theorem 12.6 concerns simple modules and not standard modules. In Nakajima's approach, the q-characters of the simple modules are obtained as alternating sums of q-characters of standard modules using intersection cohomology methods. General ℓ We now consider the category C ℓ for an arbitrary integer ℓ. Type of g ℓ Type of A ℓ A 1 ℓ A ℓ X n 1 X n A 2 2 D 4 A 2 3 E 6 A 2 4 E 8 A 3 2 E 6 A 4 2 E 8 13.1 We define a quiver Γ ℓ with vertex set {(i, k) | i ∈ I, 1 k ℓ + 1}. The arrows of Γ ℓ are given by the following rule. Suppose that (i, k) is such that i ∈ I 0 and k is odd, or i ∈ I 1 and k is even. Then the arrows adjacent to (i, k) are (h) the horizontal arrows (i, k − 1) → (i, k) if k > 1 and (i, k + 1) → (i, k) if k ℓ; (v) the vertical arrows (i, k) → ( j, k) where a i j = −1 and k ℓ. All arrows are of this type. Let B ℓ be the n(ℓ+1)×nℓ-matrix with set of column indices I ×[1, ℓ] and set of row indices I × [1, ℓ + 1]. The entry b (i,k),( j,m) is equal to 1 if there is an arrow from ( j, m) to (i, k) in Γ ℓ , to −1 if there is an arrow from (i, k) to ( j, m), and to 0 otherwise. Let A ℓ = A ( B ℓ ) be the cluster algebra attached to the initial seed (x, B ℓ ), where x = (x (i,k) | i ∈ I, 1 k ℓ + 1). This is a cluster algebra of rank nℓ, with n frozen variables f i := x (i,ℓ+1) (i ∈ I). It follows easily from [FZ3] that A ℓ has in general infinitely many cluster variables. The exceptional pairs (g, ℓ) for which A ℓ has finite cluster type are listed in Table 1. 13.3 For i ∈ I and k ∈ [1, ℓ + 1], define 13.2 r(i, k) =          2 ℓ − k + 1 2 if i ∈ I 0 , 2 ℓ − k + 2 2 − 1 if i ∈ I 1 , where ⌈x⌉ denotes the smallest integer x. These integers satisfy (a) r(i, k) r(i, k + 1) r(i, k + 2) = r(i, k) − 2, (b) if a i j = −1 then r( j, k) is the unique integer strictly between r(i, k) and r(i, k) + 2(−1) k ε i . Recall from 3.10 the Kirillov-Reshetikhin modules W (i) k,a (i ∈ I, k ∈ N, a ∈ C * ). The Kirillov-Reshetikhin modules in C ℓ have spectral parameters of the form a = q r for some integer r between 0 and ℓ + 1. To simplify notation we shall write W (i) k,r instead of W (i) k,q r . We can now state our main conjecture, which generalizes Conjecture 4.6 to arbitrary ℓ. Conjecture 13.2 The map x (i,k) → [W (i) k,r(i,k) ] extends to a ring isomorphism ι from the cluster algebra A ℓ to the Grothendieck ring R ℓ of C ℓ . If we identify A ℓ with R ℓ via ι, C ℓ becomes a monoidal categorification of A ℓ . The idea to choose this initial seed for A ℓ comes from the T -systems. Indeed, after replacing x (i,k) by [W (i) k,r(i,k) ], the exchange relations (2) for the initial cluster variables become, [CP2] that Conjecture 13.2 holds for g = sl 2 and any ℓ (see Example 5.2). 
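The integers r(i, k) of 13.3 are straightforward to tabulate. The following minimal Python sketch transcribes the two ceiling formulas and numerically checks property (a), read as r(i, k) ⩾ r(i, k+1) ⩾ r(i, k+2) = r(i, k) − 2 (the inequality signs are lost in the displayed text); the value ℓ = 5 and the encoding of the bipartition by a boolean flag are illustrative choices only.

```python
# Sketch of the integers r(i,k) from 13.3: r(i,k) = 2*ceil((l-k+1)/2) for i in I0
# and 2*ceil((l-k+2)/2) - 1 for i in I1, together with a check of property (a).
from math import ceil

def r(i_in_I0, k, ell):
    if i_in_I0:
        return 2 * ceil((ell - k + 1) / 2)
    return 2 * ceil((ell - k + 2) / 2) - 1

ell = 5  # illustrative choice
for i_in_I0 in (True, False):
    vals = [r(i_in_I0, k, ell) for k in range(1, ell + 2)]
    print('I0' if i_in_I0 else 'I1', vals)
    for k in range(1, ell):
        # property (a): r(i,k) >= r(i,k+1) >= r(i,k+2) = r(i,k) - 2
        assert vals[k - 1] >= vals[k] >= vals[k + 1] == vals[k - 1] - 2
```

Property (b) can be checked in the same elementary way once a Cartan matrix and the bipartition I = I0 ∪ I1 are fixed.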
In this case A ℓ ≡ R ℓ is a cluster algebra of finite cluster type A ℓ , with one frozen variable [W ℓ+1,0 ]. The cluster variables are the classes of the other Kirillov-Reshetikhin modules of C ℓ , namely [W k,2s ], It follows from the work of Chari and Pressley (1 k ℓ, 0 s ℓ − k + 1). To determine the compatible pairs of cluster variables, one can use again the geometric model of [FZ2] and attach to each cluster variable a diagonal of the (ℓ + 3)-gon P ℓ+3 as follows: [W k,2s ] −→ [s + 1, s + k + 2]. We then have that W k,2s ⊗W k ′ ,2s ′ is simple if and only if the corresponding diagonals do not intersect in the interior of P ℓ+3 . 13.5 Take g of type A 2 and ℓ = 2. We choose I 0 = {1} and I 1 = {2}. In this case Conjecture 13.2 holds (see below 13.8). The cluster algebra A 2 has finite cluster type D 4 , hence every simple object of C 2 is isomorphic to a tensor product of cluster simple objects and frozen simple objects. The sixteen cluster simple objects are L(Y 1,0 ), L(Y 1,2 ), L(Y 1,4 ), L(Y 2,1 ), L(Y 2,3 ), L(Y 2,5 ), L(Y 1,0 Y 1,2 ), L(Y 1,2 Y 1,4 ), L(Y 2,1 Y 2,3 ), L(Y 2,3 Y 2,5 ), L(Y 1,0 Y 2,3 ), L(Y 1,2 Y 2,5 ), L(Y 1,4 Y 2,1 ), L(Y 1,0 Y 1,2 Y 2,5 ), L(Y 1,0 Y 2,3 Y 2,5 ), L(Y 1,0 Y 1,2 Y 2,3 Y 2,5 ). They have respective dimensions 3, 3, 3, 3, 3, 3, 6, 6, 6, 6, 8, 8, 8, 15, 15, 35. Note that only the first ten are Kirillov-Reshetikhin modules. The next five are evaluation modules. The last module L(Y 1,0 Y 1,2 Y 2,3 Y 2,5 ) is not an evaluation module. Its restriction to U q (g) is isomorphic to V (2ϖ 1 + 2ϖ 2 ) ⊕V (ϖ 1 + ϖ 2 ). The two frozen modules are the Kirillov-Reshetikhin modules L(Y 1,0 Y 1,2 Y 1,4 ), L(Y 2,1 Y 2,3 Y 2,5 ), of dimension 10. By [FZ3] there are fifty clusters, i.e. fifty factorization patterns of a simple object of C 2 as a tensor product of cluster simple objects and frozen simple objects. 13.6 Take g of type A 4 and ℓ = 3. In this case A 3 has infinitely many cluster variables. Choose I 0 = {1, 3} and I 1 = {2, 4}. Thus the simple module L(Y 1,4 Y 2,1 Y 2,7 Y 3,4 ) belongs to C 3 . However, it is not a real simple object [L, §4.3] because in the Grothendieck ring R 3 we have [L(Y 1,4 Y 2,1 Y 2,7 Y 3,4 )] 2 = [L(Y 2 1,4 Y 2 2,1 Y 2 2,7 Y 2 3,4 )] + [L(Y 2,1 Y 2,3 Y 2,5 Y 2,7 Y 4,3 Y 4,5 )]. If C 3 is indeed a monoidal categorification of A 3 , then [S] cannot be a cluster monomial. For g of type A 3 and ℓ = 3, there is a similar example. The simple module L(Y 1,4 Y 2,1 Y 2,7 Y 3,4 ) belongs to C 3 , and we have [L(Y 1,4 Y 2,1 Y 2,7 Y 3,4 )] 2 = [L(Y 2 1,4 Y 2 2,1 Y 2 2,7 Y 2 3,4 )] + [L(Y 2,1 Y 2,3 Y 2,5 Y 2,7 )]. We expect the existence of non real simple objects in C ℓ whenever A ℓ does not have finite cluster type. 13.7 Take g of type A n and let ℓ be arbitrary. Let us sketch why in this case Conjecture 13.2 would essentially follow from a conjecture of [GLS2] (see also [GLS4,§23.1]). First, we can use the quantum affine analogue of the Schur-Weyl duality [CP5, Che, GRV] to relate the finite-dimensional representations of U q ( g) with the finite-dimensional representations of the affine Hecke algebras H m (t) of type A m (m 1) with parameter t = q 2 . More precisely, for every m we have a functor F m from mod H m (t) to the category C of finite-dimensional representations of U q ( g), which maps every simple module of H m (t) to a simple module of U q ( g) or to the zero module. The simple U q ( g)-module with highest l-weight ∏ n i=1 ∏ k i r=1 Y i,a i is the image by F m of a simple H m (t)-module, where m = ∑ i ik i . 
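The geometric criterion of 13.4 for g = sl2 can be made completely explicit: W_{k,2s} corresponds to the diagonal [s+1, s+k+2] of the (ℓ+3)-gon, and the tensor product W_{k,2s} ⊗ W_{k′,2s′} is simple exactly when the two diagonals do not cross in the interior. The short sketch below implements this rule; the function names and the ℓ = 2 test values are illustrative only.

```python
# Diagonals of the (l+3)-gon attached to Kirillov-Reshetikhin modules W_{k,2s}
# of C_l for g = sl2 (see 13.4), and the interior-crossing test that detects
# non-simplicity of a tensor product of two such modules.

def diagonal(k, s):
    return (s + 1, s + k + 2)

def cross(d1, d2):
    # Two chords of a convex polygon cross in the interior iff their endpoints
    # strictly interleave around the boundary.
    (a, b), (c, d) = sorted(d1), sorted(d2)
    return a < c < b < d or c < a < d < b

def tensor_is_simple(k, s, kp, sp):
    return not cross(diagonal(k, s), diagonal(kp, sp))

# For l = 2: W_{1,0} x W_{1,2} gives diagonals (1,3), (2,4), which cross,
# so the product is not simple; W_{1,0} x W_{1,4} gives (1,3), (3,5), which
# only share a vertex, so the product is simple.
print(tensor_is_simple(1, 0, 1, 1), tensor_is_simple(1, 0, 1, 2))  # False True
```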
Moreover, the functors F m are multiplicative in the following sense: for M 1 in mod H m 1 (t) and M 2 in mod H m 2 (t) one has F m 1 +m 2 (M 1 ⊙ M 2 ) = F m 1 (M 1 ) ⊗ F m 2 (M 2 ) , where − ⊙ − denotes the induction product from mod H m 1 (t) × mod H m 2 (t) to mod H m 1 +m 2 (t). Let R be the sum over m of the Grothendieck groups of the categories mod H m (t) endowed with the multiplication induced by ⊙. The functors F m thus induce a surjective ring homomorphism Ψ : R → R, which maps classes of simples to classes of simples or to zero. Let D m,ℓ denote the full subcategory of mod H m (t) whose objects are those modules on which the generators y 1 , . . . , y m of the maximal commutative subalgebra of H m (t) have all their eigenvalues in t k | k ∈ Z, 1 − n 2 k n 2 + ℓ , (see [L]). It is easy to check that every simple object of C ℓ is of the form F m (M) for some m and some simple object M of D m,ℓ . Therefore, denoting by R ℓ the sum over m of the Grothendieck groups of the categories D m,ℓ , we see that Ψ restricts to a surjective ring homomorphism from R ℓ to R ℓ . By a dual version of Ariki's theorem [A, LNT] the Z-basis of R ℓ given by the classes of the simple objects can be identified with the dual canonical basis of the coordinate ring C[N] of a maximal unipotent subgroup N of SL n+ℓ+1 (C). So, to summarize, for g of type A n , Conjecture 13.2 can be reformulated as a conjecture about multiplicative properties of the dual canonical basis of C [N]. In [GLS2], a cluster algebra structure on C[N] has been studied in relation with the representation theory of preprojective algebras. It was shown that the cluster monomials belong to the dual of Lusztig's semicanonical basis of C[N] [Lu2]. More precisely they are the elements parametrized by the irreducible components of the nilpotent varieties with an open orbit. It was also conjectured that these elements of the dual semicanonical basis belong to the dual canonical basis, hence, by Ariki's theorem, are classes of irreducible representations of some H m . Finally, one can check that the initial seed of the cluster algebra A ℓ given by Conjecture 13.2 is the image under Ψ of a seed of C[N] ≡ R ℓ . So if the conjecture of [GLS2] was proved, by applying Ψ we would deduce that all cluster monomials of A ℓ are classes of simple objects of C ℓ . To finish the proof of Conjecture 13.2 one would still have to explain why all classes of real simple objects are cluster monomials. 13.8 In [GLS1] it was shown that if N is of type A r (r 4) the dual canonical and dual semicanonical basis of C[N] coincide. Moreover these are the only cases for which C[N] has finite cluster type. It then follows from 13.7 that Conjecture 13.2 holds if n + ℓ 4, and moreover in this case all simple objects of C ℓ are real. This proves the conjecture for g of type A 2 and ℓ = 2 (see 13.5). 13.9 For g of type A n , there is an interesting relation between the cluster algebra A ℓ and the grassmannian Gr(n + 1, n + ℓ + 2) of (n + 1)-dimensional subspaces of C n+ℓ+2 . Indeed, the homogeneous coordinate ring C[Gr(n + 1, n + ℓ + 2)] has a cluster algebra structure [S] with an initial seed given by a similar rectangular lattice (see also [GSV,GLS3]). More precisely, denote the Plücker coordinates of C[Gr(n + 1, n + ℓ + 2)] by [i 1 , . . . , i n+1 ], (1 i 1 < · · · < i n+1 n + ℓ + 2). The ℓ + 2 Plücker coordinates [1, 2, . . . , n + 1], [2, 3, . . . , n + 2], . . . , [ℓ + 2, ℓ + 3, . . . 
, n + ℓ + 2], belong to the subset of frozen variables of the cluster algebra C[Gr(n + 1, n + ℓ + 2)]. Hence, the quotient ring S ℓ of C[Gr(n + 1, n + ℓ + 2)] obtained by specializing these variables to 1 is also a cluster algebra, with the same principal part. By comparing the initial seed of A ℓ with the initial seed of S ℓ obtained from [S, §4, §5], we see immediately that these two cluster algebras are isomorphic. So we can reformulate Conjecture 13.2 for g = sl n+1 by stating that C ℓ should be a monoidal categorification of the quotient ring S ℓ of C[Gr(n + 1, n + ℓ + 2)] by the relations [1, 2, . . . , n + 1] = [2, 3, . . . , n + 2] = · · · = [ℓ + 2, ℓ + 3, . . . , n + ℓ + 2] = 1. Note that for g = sl 2 we recover the situation of §13.4. Note also that [GLS3] provides an additive categorification of the cluster algebra C[Gr(n + 1, n + ℓ + 2)], as a Frobenius subcategory of the module category of a preprojective algebra of type A n+ℓ+1 . Example 13.3 Take n = 2 and ℓ = 2 (see §13.5). In this case S 2 is the ring obtained from C[Gr(3, 6)] by quotienting the following relations For simplicity, we denote again by [i, j, k] the image of the Plücker coordinate in the quotient S 2 . Then the identification of the Grothendieck ring R 2 with S 2 gives the following identities of cluster S * denote the dual of S. The Drinfeld polynomials of S * are easily deduced from those of S, namely π i,S * (u) = π i ∨ ,S (q −h u), (i ∈ I), Corollary 6. 5 5Let S = L(m) be a simple object of C 1 . Suppose that FM(m) χ q (S) 2 . Then FM(m) = χ q(S). satisfied by the Kirillov-Reshetikhin modules S(α i ) and S(−α i ), we obtain immediately ι( f i ) = [F i ], (i ∈ I). The positive roots are all multiplicity-free, labelled by the subintervals of I. For [i, j] ⊂ I, we denote by α [i, j] := ∑ j k=i α k the corresponding positive root. By Theorem 7.8, Eq. (34) is verified for all positive roots. This and (33) proves Conjecture 4.6 (i) in type A. Figure 1 : 1The snake in type A 3 . 2 Example 10. 3 23Let g be of type A 3 and take I 0 = {2} and I 1 = {1, 3}. Choose Example 13. 1 1Take g of type A 3 and choose I 0 = {1, 3} and I 1 = {2}. The quiver Γ 3 is then(1, 1) ← (1, 2) → (1, 3) ← (1, 4) ↓ ↑ ↓ (2, 1) → (2, 2) ← (2, 3) → (2, 4) ↑ ↓ ↑ (3, 1) ← (3, 2) → (3, 3) ← (3, 4) ∈ I, k ℓ),which is an instance of Eq. (5). [ 1 , 12, 3] = [2, 3, 4] = [3, 4, 5] = [4, 5, 6] = 1. Definition 2.1 Let A be a cluster algebra and let M be an abelian monoidal category. We say that M is a monoidal categorification of A if the Grothendieck ring of M is isomorphic to A , and if(i) the cluster monomials of A are the classes of all the real simple objects of M ;(ii) the cluster variables of A (including the frozen ones) are the classes of all the real prime simple objects of M . existence of a monoidal categorification of a cluster algebra is a very strong property, as shown by the following result.Proposition 2.2 Suppose that the cluster algebra A has a monoidal categorification M . Then(i) every cluster variable of A has a Laurent expansion with positive coefficients with respect to any cluster;(ii) the cluster monomials of A are linearly independent.The Proof -If m is a cluster monomial, we denote by S(m) the simple object with class [S(m)] = m. Let z be a cluster variable, and let For g = sl 2 , we have I = {1}, and we may drop the index i in [WUsing these equations, one can calculate inductively the expression of any [W (i) k,a ] as a polynomial in the classes [W (i) 1,a ] of the fundamental modules. Example 3.4 (i) k,a ]. 
The T -system reads [W k,a ][W k,aq 2 ] = [W k+1,a ][W k−1,aq 2 ] + 1, (a ∈ C * , k ∈ N * ). This easily implies that [W k,a ] is given by the k × k determinant: [W k,a ] = [W 1,a ] 1 0 · · · 0 1 [W 1,aq 2 ] 1 . . . . . . 0 1 [W 1,aq 4 ] . . . . . . . . . . . . . . . . . . . . . and the same idea as in the proof of Proposition 2.2. The details are left to the reader. 2 12.3 So far we have only used some combinatorial and representation-theoretical techniques. Table 1 : 1Algebras A ℓ of finite cluster type. Finite-dimensional representations of quantum affine algebras. T Akasaka, M Kashiwara, Publ. RIMS. 33T. AKASAKA, M. KASHIWARA, Finite-dimensional representations of quantum affine algebras, Publ. RIMS 33 (1997), 839-867. On the decomposition numbers of the Hecke algebra of G(n,1,m). S Ariki, J. Math. Kyoto Univ. 36S. ARIKI, On the decomposition numbers of the Hecke algebra of G(n,1,m), J. Math. Kyoto Univ. 36 (1996), 789-808. Cluster algebras III: upper bounds and double Bruhat cells. A Berenstein, S Fomin, A Zelevinsky, Duke Math. J. 126A. BERENSTEIN, S. FOMIN, A. ZELEVINSKY, Cluster algebras III: upper bounds and double Bruhat cells, Duke Math. J. 126 (2005), 1-52. N Bourbaki, Groupes et algèbres de Lie, Chapitres 4,5. HermannN. BOURBAKI, Groupes et algèbres de Lie, Chapitres 4,5,6, Hermann 1968. Cluster structures for 2-Calabi-Yau categories and unipotent groups. A Buan, O Iyama, I Reiten, J Scott, Compos. Math. 145A. BUAN, O. IYAMA, I. REITEN, J. SCOTT, Cluster structures for 2-Calabi-Yau categories and unipotent groups, Compos. Math. 145 (2009), 1035-1079. Tilting theory and cluster combinatorics. A Buan, R Marsh, M Reineke, I Reiten, G Todorov, Adv. Math. 204A. BUAN, R. MARSH, M. REINEKE, I. REITEN, G. TODOROV, Tilting theory and cluster combinatorics, Adv. Math. 204 (2006), 572-618. Cluster algebras as Hall algebras of quiver representations. P Caldero, F Chapoton, Comment. Math. Helv. 81P. CALDERO, F. CHAPOTON, Cluster algebras as Hall algebras of quiver representations, Comment. Math. Helv. 81 (2006), 595-616. From triangulated categories to cluster algebras. P Caldero, B Keller, Invent. Math. 172P. CALDERO, B. KELLER, From triangulated categories to cluster algebras, Invent. Math. 172 (2008), 169-211. Minimal affinizations of representations of quantum groups: the rank 2 case. V Chari, Publ. Res. Inst. Math. Sci. 31V. CHARI, Minimal affinizations of representations of quantum groups: the rank 2 case, Publ. Res. Inst. Math. Sci. 31 (1995), 873-911. Braid group actions and tensor products. V Chari, Int. Math. Res. Not. 7V. CHARI, Braid group actions and tensor products, Int. Math. Res. Not. 2002, 7, 357-382. Reshetikhin modules. V Chari, D Hernandez, Beyond Kirillov, Contemp. Math. to appearV. CHARI, D. HERNANDEZ, Beyond Kirillov-Reshetikhin modules, Contemp. Math. (to appear). A guide to quantum groups. V Chari, A Pressley, CambridgeV. CHARI, A. PRESSLEY, A guide to quantum groups, Cambridge 1994. Quantum affine algebras. V Chari, A Pressley, Comm. Math. Phys. 142V. CHARI, A. PRESSLEY, Quantum affine algebras, Comm. Math. Phys. 142 (1991), 261-283. Minimal affinizations of representations of quantum groups: the nonsymply-laced case. V Chari, A Pressley, Lett. Math. Phys. 35V. CHARI, A. PRESSLEY, Minimal affinizations of representations of quantum groups: the nonsymply-laced case, Lett. Math. Phys. 35 (1995), 99-114. Minimal affinizations of representations of quantum groups: the simply laced case. V Chari, A Pressley, J. Algebra. 1841V. CHARI, A. 
PRESSLEY, Minimal affinizations of representations of quantum groups: the simply laced case, J. Algebra 184 (1996), no. 1, 1-30. Quantum affine algebras and affine Hecke algebras. V Chari, A Pressley, Pacific J. Math. 174V. CHARI, A. PRESSLEY, Quantum affine algebras and affine Hecke algebras, Pacific J. Math. 174 (1996), 295-326. Factorization of representations of quantum affine algebras. V Chari, A Pressley, Modular interfaces. Riverside CAAMS/IP Stud4V. CHARI, A. PRESSLEY, Factorization of representations of quantum affine algebras, Modular interfaces, (Riverside CA 1995), AMS/IP Stud. Adv. Math., 4 (1997), 33-40. A new interpretation of Gelfand-Tzetlin bases. I V Cherednik, Duke Math. J. 54I. V. CHEREDNIK, A new interpretation of Gelfand-Tzetlin bases, Duke Math. J. 54 (1987), 563-577. La R-matrice pour les algèbres quantiques de type affine non tordu. I Damiani, Ann. Sci. Ecole Normale Sup. 31I. DAMIANI, La R-matrice pour les algèbres quantiques de type affine non tordu, Ann. Sci. Ecole Normale Sup. 31 (1998), 493-523. Q-systems as cluster algebras II: Cartan matrix of finite type and the polynomial property. P Francesco, R Kedem, arXiv:0803.0362P. DI FRANCESCO, R. KEDEM, Q-systems as cluster algebras II: Cartan matrix of finite type and the polynomial property, arXiv:0803.0362. Combinatorics of q-characters of finite-dimensional representations of quantum affine algebras. E Frenkel, E Mukhin, Comm. Math. Phys. 216E. FRENKEL, E. MUKHIN, Combinatorics of q-characters of finite-dimensional representations of quantum affine alge- bras, Comm. Math. Phys. 216 (2001), 23-57. The q-characters of representations of quantum affine algebras, Recent developments in quantum affine algebras and related topics. E Frenkel, N Reshetikhin, Contemp. Math. 248E. FRENKEL, N. RESHETIKHIN, The q-characters of representations of quantum affine algebras, Recent developments in quantum affine algebras and related topics, Contemp. Math. 248 (1999), 163-205. Cluster algebras I: Foundations. S Fomin, A Zelevinsky, J. Amer. Math. Soc. 15S. FOMIN, A. ZELEVINSKY, Cluster algebras I: Foundations, J. Amer. Math. Soc. 15 (2002), 497-529. . S Fomin, A Zelevinsky, Y -Systems, Ann. Math. 158S. FOMIN, A. ZELEVINSKY, Y -systems and generalized associahedra, Ann. Math. 158 (2003), 977-1018. Cluster algebras II: Finite type classification. S Fomin, A Zelevinsky, Invent. Math. 154S. FOMIN, A. ZELEVINSKY, Cluster algebras II: Finite type classification, Invent. Math. 154 (2003), 63-121. S Fomin, A Zelevinsky, Cluster algebras: notes for the CDM-03 conference. Current developments in mathematics. Somerville, MAInt. PressS. FOMIN, A. ZELEVINSKY, Cluster algebras: notes for the CDM-03 conference. Current developments in mathematics, 2003, 1-34, Int. Press, Somerville, MA, 2003. Cluster algebras IV: coefficients. S Fomin, A Zelevinsky, Compos. Math. 143S. FOMIN, A. ZELEVINSKY, Cluster algebras IV: coefficients, Compos. Math. 143 (2007), 112-164. On cluster algebras with coefficients and 2-Calabi-Yau categories. C Fu, B Keller, Trans. Amer. Math. Soc. 362C. FU, B. KELLER, On cluster algebras with coefficients and 2-Calabi-Yau categories, Trans. Amer. Math. Soc. 362 (2010), 859-895. Semicanonical bases and preprojective algebras. C Geiss, B Leclerc, J Schröer, Ann. Sci. Ecole Norm. Sup. 38C. GEISS, B. LECLERC, J. SCHRÖER, Semicanonical bases and preprojective algebras, Ann. Sci. Ecole Norm. Sup., 38 (2005), 193-253. Rigid modules over preprojective algebras. C Geiss, B Leclerc, J Schröer, Invent. Math. 165C. GEISS, B. LECLERC, J. 
SCHRÖER, Rigid modules over preprojective algebras, Invent. Math., 165 (2006), 589-632. C Geiss, B Leclerc, J Schröer, Partial flag varieties and preprojective algebras. 58C. GEISS, B. LECLERC, J. SCHRÖER, Partial flag varieties and preprojective algebras, Ann. Inst. Fourier (Grenoble) 58 (2008), 825-876. C Geiss, B Leclerc, J Schröer, arXiv:math/0703039Cluster algebra structures and semicanonical bases for unipotent groups. C. GEISS, B. LECLERC, J. SCHRÖER, Cluster algebra structures and semicanonical bases for unipotent groups, arXiv:math/0703039. Cluster algebras and Poisson geometry. M Gekhtman, M Shapiro, A Vainshtein, Moscow Math. J. 3M. GEKHTMAN, M. SHAPIRO, A. VAINSHTEIN, Cluster algebras and Poisson geometry, Moscow Math. J. 3 (2003), 899-934. V Ginzburg, N Yu Reshetikhin, E Vasserot, Quantum groups and flag varieties. 175V. GINZBURG, N. YU. RESHETIKHIN, E. VASSEROT, Quantum groups and flag varieties, A.M.S. Contemp. Math. 175 (1994), 101-130. Langlands reciprocity for affine quantum groups of type A n. V Ginzburg, E Vasserot, Internat. Math. Res. Notices. 3V. GINZBURG, E. VASSEROT, Langlands reciprocity for affine quantum groups of type A n , Internat. Math. Res. Notices 3 (1993), 67-85. Algebraic approach to q,t-characters. D Hernandez, Adv. Math. 187D. HERNANDEZ, Algebraic approach to q,t-characters, Adv. Math. 187 (2004), 1-52. Monomials of q and q,t-characters for non simply-laced quantum affinizations. D Hernandez, Math. Z. 250D. HERNANDEZ, Monomials of q and q,t-characters for non simply-laced quantum affinizations, Math. Z. 250 (2005), 443-473. The Kirillov-Reshetikhin conjecture and solutions of T -systems. D Hernandez, J. Reine Angew. Math. 596D. HERNANDEZ, The Kirillov-Reshetikhin conjecture and solutions of T -systems, J. Reine Angew. Math. 596 (2006), 63-87. On minimal affinizations of representations of quantum groups. D Hernandez, Comm. Math. Phys. 277D. HERNANDEZ, On minimal affinizations of representations of quantum groups, Comm. Math. Phys. 277 (2007), 221- 259. R Inoue, O Iyama, A Kuniba, T Nakanishi, J Suzuki, arXiv:0812.0667Periodicities of T -systems and Y -systems. R. INOUE, O. IYAMA, A. KUNIBA, T. NAKANISHI, J. SUZUKI, Periodicities of T -systems and Y -systems, arXiv: 0812.0667. On level-zero representations of quantized affine algebras. M Kashiwara, Duke Math. J. 112M. KASHIWARA, On level-zero representations of quantized affine algebras, Duke Math. J. 112 (2002), 117-175. Representations of quantum affine algebras. D Kazhdan, Y Soibelman, Selecta Math. (N.S.). 1D. KAZHDAN, Y. SOIBELMAN, Representations of quantum affine algebras, Selecta Math. (N.S.) 1 (1995), 537-595. Q-systems as cluster algebras. R Kedem, J. Phys. A. 4119ppR. KEDEM, Q-systems as cluster algebras, J. Phys. A 41 (2008), no. 19, 194011, 14 pp. Cluster algebras and quantum affine algebras. B Keller, Oberwolfach Report. B. Leclerc5B. KELLER, Cluster algebras and quantum affine algebras, after B. Leclerc, Oberwolfach Report 5 (2008), 455-458. B Keller, arXiv:0807.1960Cluster algebras, quiver representations and triangulated categories. B. KELLER, Cluster algebras, quiver representations and triangulated categories, arXiv:0807.1960. Functional relations in solvable lattice models. I. Functional relations and representation theory. A Kuniba, T Nakanishi, J Suzuki, Internat. J. Modern Phys. A. 9A. KUNIBA, T. NAKANISHI, J. SUZUKI, Functional relations in solvable lattice models. I. Functional relations and representation theory. Internat. J. Modern Phys. 
A 9 (1994), 5215-5266, Imaginary vectors in the dual canonical basis of U q (n). B Leclerc, Transform. Groups. 8B. LECLERC, Imaginary vectors in the dual canonical basis of U q (n), Transform. Groups 8 (2003), 95-104. Induced representations of affine Hecke algebras and canonical bases of quantum groups, in 'Studies in memory of Issai Schur. B Leclerc, M Nazarov, J.-Y Thibon, Progress in Math. 210, BirkhäuserB. LECLERC, M. NAZAROV, J.-Y. THIBON, Induced representations of affine Hecke algebras and canonical bases of quantum groups, in 'Studies in memory of Issai Schur', Progress in Math. 210, Birkhäuser 2002. On quiver varieties. G Lusztig, Adv. Math. 136G. LUSZTIG, On quiver varieties, Adv. Math. 136 (1998), 141-182. Semicanonical bases arising from enveloping algebras. G Lusztig, Adv. Math. 151G. LUSZTIG, Semicanonical bases arising from enveloping algebras, Adv. Math. 151 (2000), 129-139. Generalized associahedra via quiver representations. R Marsh, M Reineke, A Zelevinsky, Trans. Amer. Math. Soc. 355R. MARSH, M. REINEKE, A. ZELEVINSKY, Generalized associahedra via quiver representations, Trans. Amer. Math. Soc. 355 (2003), 4171-4186. Quiver varieties and finite-dimensional representations of quantum affine algebras. H Nakajima, J. Amer. Math. Soc. 14H. NAKAJIMA, Quiver varieties and finite-dimensional representations of quantum affine algebras. J. Amer. Math. Soc. 14 (2001), 145-238. t-analogs of q-characters of Kirillov-Reshetikhin modules of quantum affine algebras. H Nakajima, Represent. Theory. 7H. NAKAJIMA, t-analogs of q-characters of Kirillov-Reshetikhin modules of quantum affine algebras, Represent. Theory 7 (2003), 259-274. Quiver varieties and t-analogs of q-characters of quantum affine algebras. H Nakajima, Ann. Math. 160H. NAKAJIMA, Quiver varieties and t-analogs of q-characters of quantum affine algebras, Ann. Math. 160 (2004), 1057-1097. t-analogs of q-characters of quantum affine algebras of type E 6. H Nakajima, arXiv:math.QA/06066377H. NAKAJIMA, t-analogs of q-characters of quantum affine algebras of type E 6 , E 7 , E 8 , arXiv:math.QA/0606637. H Nakajima, arXiv:0905.0002Quiver varieties and cluster algebras. H. NAKAJIMA, Quiver varieties and cluster algebras, arXiv:0905.0002. On Frenkel-Mukhin algorithm for q-character of quantum affine algebras. W Nakai, T Nakanishi, arXiv:0801.2239Adv. Stud. in Pure Math. To appear inW. NAKAI, T. NAKANISHI, On Frenkel-Mukhin algorithm for q-character of quantum affine algebras, To appear in Adv. Stud. in Pure Math., arXiv:0801.2239. On irreducibility of tensor products of Yangian modules associated with skew Young diagrams. M Nazarov, V Tarasov, Duke Math. J. 112M. NAZAROV, V. TARASOV, On irreducibility of tensor products of Yangian modules associated with skew Young diagrams, Duke Math. J. 112 (2002), 343-378. Grassmannians and cluster algebras. J Scott, Proc. London Math. Soc. 92J. SCOTT, Grassmannians and cluster algebras, Proc. London Math. Soc. 92 (2006), 345-380. Standard modules of quantum affine algebras. M Varagnolo, E Vasserot, Duke Math. J. 111M. VARAGNOLO, E. VASSEROT, Standard modules of quantum affine algebras, Duke Math. J. 111 (2002), 509-533. Hernandez David, CNRS et Ecole Normale Supérieure Paris DMA, 45, rue d'Ulm, 75005. [email protected], FranceDavid HERNANDEZ : CNRS et Ecole Normale Supérieure Paris DMA, 45, rue d'Ulm, 75005 Paris, France email : [email protected] . Leclerc : Bernard, Cnrs Lmno, Umr, email : [email protected] cedex. 
Bernard LECLERC : LMNO, CNRS UMR 6139, Université de Caen, 14032 Caen cedex, France email : [email protected]
[]
[ "High-resolution surface structure determination from bulk X-ray diffraction data", "High-resolution surface structure determination from bulk X-ray diffraction data" ]
[ "Nirman Chakraborty \nCSIR-Central Glass and Ceramic Research Institute\n196, Raja S. C. Mullick Road, JadavpurKolkata-700032India\n", "Swastik Mondal \nCSIR-Central Glass and Ceramic Research Institute\n196, Raja S. C. Mullick Road, JadavpurKolkata-700032India\n" ]
[ "CSIR-Central Glass and Ceramic Research Institute\n196, Raja S. C. Mullick Road, JadavpurKolkata-700032India", "CSIR-Central Glass and Ceramic Research Institute\n196, Raja S. C. Mullick Road, JadavpurKolkata-700032India" ]
[]
The key to most surface phenomena lies with the surface structure. In particular, it is the charge density distribution over the surface that primarily controls the overall interaction of the material with the external environment. It is generally accepted that surface structure cannot be deciphered from conventional bulk X-ray diffraction data. Thus, when we intend to delineate the surface structure in particular, we are technically compelled to resort to surface-sensitive techniques like High Energy Surface X-ray Diffraction (HESXD), Low Energy Electron Diffraction (LEED), Scanning Transmission Electron Microscopy (STEM) and Grazing Incidence Small Angle X-ray Scattering (GISAXS). In this work, using aspherical charge density models of crystal structures in different molecular and extended solids, we show a convenient and complementary way of determining high-resolution experimental surface charge density distributions from conventional bulk X-ray diffraction (XRD) data. The usefulness of our method has been validated on the surface functionality of boron carbide. While certain surfaces in boron carbide show the presence of substantial electron-deficient centers, such centers are absent in others. Hence, a plausible correlation between the calculated surface structures and the corresponding functional property has been identified. Author Contributions: S. M. conceived and supervised the research project. N. C. carried out the surface charge density determination with the assistance of S. M. Gas sensing experiments were carried out by N. C. S. M. is responsible for the bulk charge density models of α-glycine, α-boron and boron carbide. N. C. and S. M. analyzed the experimental results and wrote the manuscript. Competing interests: The authors declare no competing interests.
null
[ "https://arxiv.org/pdf/2205.13239v1.pdf" ]
249,097,726
2205.13239
56672366f871934a29fae60e29a5bba87adcf92d
High-resolution surface structure determination from bulk X-ray diffraction data Nirman Chakraborty CSIR-Central Glass and Ceramic Research Institute 196, Raja S. C. Mullick Road, JadavpurKolkata-700032India Swastik Mondal CSIR-Central Glass and Ceramic Research Institute 196, Raja S. C. Mullick Road, JadavpurKolkata-700032India High-resolution surface structure determination from bulk X-ray diffraction data 1 * Corresponding author: [email protected] The key to most surface phenomenon lies with the surface structure. Particularly it is the charge density distribution over surface that primarily controls overall interaction of the material with external environment. It is generally accepted that surface structure cannot be deciphered from conventional bulk X-ray diffraction data. Thus, when we intend to delineate the surface structure in particular, we are technically compelled to resort to surface sensitive techniques like High Energy Surface X-ray Diffraction (HESXD), Low Energy Electron Diffraction (LEED), Scanning Transmission Electron Microscopy (STEM) and Grazing Incidence Small Angle X-ray Scattering (GISAXS). In this work, using aspherical charge density models of crystal structures in different molecular and extended solids, we show a convenient and complementary way of determining highresolution experimental surface charge density distribution from conventional bulk X-ray diffraction (XRD) data. The usefulness of our method has been validated on surface functionality of boron carbide. While certain surfaces in boron carbide show presence of substantial electron deficient centers, it is absent in others. Henceforth, a plausible correlation between the calculated surface structures and corresponding functional property has been identified.Author Contributions S. M. conceived and supervised the research project. N. C. carried out surface charge density determination with the assistance of S. M. Gas sensing experiments were carried out by N. C. S. M. is responsible for the bulk charge density models of α-glycine, αboron and boron carbide. N. C. and S. M. analyzed the experimental results and wrote the manuscript.Competing interestsThe authors declare no competing interests. Several physico-chemical processes proceed via interaction of external factors with the material's surface [1][2][3]. For example, in gas sensing, it is the interaction of target gas molecule with material's surface which determines the result of a sensing phenomenon [4][5]. For catalysis and dye degradation, it is the mutual interaction of target molecule with the catalyst surface which determines the route in which the reaction may proceed [6][7]. Similar procedures are followed in other processes like H 2 generation by photo catalytic water splitting, solar energy harvesting, gas separation through membranes and water purification by removal of heavy elements [8][9][10]. The wide applicability of surface mediated phenomena makes the role of material's surface very important and decisive, preparing scope for detailed analysis of surface and surface structures in functional materials [1][2][3][4][5]. Understanding surface structure has remained a major concern for scientists since decades. Consequently, there have been several definitions of surfaces of solids. Surface has been defined as a particularly simple type of interface at which the solid is in contact with the surrounding world, i.e. the atmosphere or in ideal case, the vacuum [11]. 
In crystallographic terms, surface is an entity which differs from the bulk by the fact that the periodic unit cell stacking is finite in one of the three dimensions [12]. This allows the near-surface unit cells to modify their geometric arrangements to give rise to a specific surface structure [12]. However, while the surface is an inevitable entity in every material system, till date the tools to elucidate surface charge density are very limited. While X-ray diffraction remains as the most popular and effective tool for identifying and analyzing bulk crystal structures [13][14][15][16][17], the processes involved for determining surface structures have been perceived to be complex and has been both considered and executed separately from bulk XRD studies [18]. Gustafon et al. devised a method of analyzing surfaces by high-energy surface X-ray diffraction (HESXD) using photons of energy 85 keV [19]. Several other works on surface structure analysis have been conducted by the methods of Low Energy Electron Diffraction (LEED) and diffused LEED (DLEED) [20][21]. However, technical problems with all the methods mentioned above except X-ray diffraction involve special and elaborate experimental facilities with considerable expertise in data interpretation as well [20][21]. While the experiments are too sophisticated and unavailable to most researchers, the complication with their execution adds significantly to above difficulties. A recent development in this respect has been introduction of real space imaging technique to map local charge density of crystalline materials in four dimensions with sub-angstrom resolution for imperfect structures like interfaces, boundaries and defects [22]. However, the drawbacks of the technique lie in complexity and availability of experimental set-up which involves scanning transmission electron microscopy (STEM) alongside an angle-resolved pixellated fast-electron detector [22]. Grazing Incidence Small Angle X-ray Scattering (GISAXS) is another technique to estimate surface structures [23][24]. However, the utility of this method has been till date confined to thin films and nanostructures only. This calls for need to develop a convenient complementary method by which elucidation of surface structure (charge density distribution) with higher precision is feasible. The charge density at any point is determined by the contribution of charge densities of surrounding atoms/sources [25][26]. Using the concept of Green's function, the contributions from different atomic basins towards a particular point can be calculated based on the formula [25][26] ( ) = ∫ −∇ 2 ( ′ ) 4 | − ′ | ′ ………… (1) Where ( ) is the charge density at position r due to densities at other points indicated by ( ′ ). As evident from equation (1), if the contributions of atoms above a surface towards a point on the surface are not included, then the charge density calculated at that point should be different than that in bulk. This indicates that when surface truncation occurs, due to above explanation, the charge density distribution in the truncated surface will be different than that of bulk [18,27]. Thus if contribution of charge density sources above a plane (surface) could be excluded and rearrangement of charge density due to that exclusion could be made, then in-principle, surface charge density can be obtained from bulk charge density. 
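As a minimal numerical illustration of this point (a toy model only: the spherical Gaussian "atoms", their positions and the width are arbitrary and are not the aspherical multipole densities used in this work), one can compare the density summed at a point of a plane when only the sources on or below the plane contribute with the value obtained when all sources contribute.

```python
# Toy illustration of the truncation argument around Eq. (1): the density at a
# point on a plane changes when the sources above the plane are excluded.
import numpy as np

def rho(point, centers, width=0.5):
    # Sum of spherical Gaussian "atomic" densities -- a stand-in model only.
    d2 = np.sum((centers - point) ** 2, axis=1)
    return float(np.sum(np.exp(-d2 / (2 * width ** 2))))

atoms = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.75, 1.3, 0.0],  # in the plane z = 0
                  [0.75, 0.4, 1.2], [0.75, 0.4, -1.2]])                 # above / below it
p = np.array([0.75, 0.4, 0.0])            # probe point on the plane
below = atoms[atoms[:, 2] <= 0.0]         # "surface" segment: plane and region below

print('surface-only density:', rho(p, below))
print('bulk density        :', rho(p, atoms))
print('difference          :', rho(p, atoms) - rho(p, below))
```

The sketch only shows that the sum in Eq. (1) depends on which sources are included; the quantitative surface densities reported below come from the aspherical multipole models, not from such a toy sum, and no rearrangement of the remaining density is attempted here.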
In an X-ray diffraction experiment from a crystal, the diffracted intensity is the time averaged scattering intensity of all electrons in the crystal [28]. In principle one can thus obtain time averaged charge density distributions in bulk and in turn on a plane. If this plane is considered to be the surface, then technically, after removing the charge density contributions from the atoms above the plane, surface structure can be determined from bulk X-ray diffraction. However, this was never realized, as in conventional Independent Atom Model (IAM), the atoms are considered as discrete and independent of one another; neglecting deformation of charge densities due to atom-atom interactions [29]. As a result there cannot be any difference between the charge density distribution in bulk and surface in IAM. During the 2 nd half of last century, the development of methods for aspherical charge density analysis changed this picture where aspherical charge density models such as multipole (MP) formalisms have been introduced [30][31][32]. One such formalism by Hansen and Coppens has been expressed as [30]: ρ at (r)=P c ρ core (r) + P v k 3 ρ valence (kr) + ∑ ′ 3 =0 ( ′ ) ∑ ± =0 ± ( , )……(2) where the density of each atom (ρ at ) is divided into core, valence and deformation parts. The parameters P c and P v in first two terms of the equation denote the core shell and valence shell population coefficients [30]. The third term stands for the aspherical features of valence charge density, introducing the idea of deformation of charge density due to chemical bonding and other inter-atomic interactions [30]. The concept of bonding between atoms via charge transfer, otherwise ignored by IAM was thus included in such aspherical charge density formalisms [30]. Hence, using aspherical charge density models, the charge density contributions of an atom on other atoms could be calculated and excluded, making the formalism relevant to determining surface of a material [30]. The present work devises a simple method by which experimental surface charge distribution of materials can be determined from conventional bulk XRD data, employing concepts of aspherical charge density analysis [30][31][32]. An idea has been introduced as how to utilize information obtained from above studies in correlating surface structures with associated surface properties. Charge density models based on low temperature, high-resolution single-crystal X-ray diffraction data of molecular solid α-glycine and extended solids α-boron, boron carbide and calcium mono-silicide (CaSi) were taken as the test cases for this study (see Supplementary Information for crystal structures) [33][34][35][36][37]. Accurate multipole (MP) models have been derived for all samples using high-resolution single crystal X-ray diffraction data from bulk crystals [33][34][35][36][37]. For calculating the surface charge density of a particular solid, three atoms in the solid were chosen in such a way that they constitute a triangle (Fig. 1). Then the geometrical centre of the triangle was identified. Depending on the values of unit cell lengths and keeping the centre of the analyses at the above geometrical centre, an xyz (x: length extending along x axis, y: length extending along y axis and z: length extending along z axis) block of the crystal was considered. 
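The selection step just described can be summarised schematically as follows; this is a sketch of the geometry only (the coordinates are hypothetical placeholders and this is not the XD2006 workflow itself): the three chosen atoms define a plane, their geometric centre fixes the centre of the block, and the block is split into the segment on or below the plane and the full block.

```python
# Schematic of the block construction: three atoms -> plane and centroid ->
# partition of a surrounding block into "plane and below" versus the whole block.
import numpy as np

tri = np.array([[0.0, 0.0, 0.0], [2.6, 0.0, 0.0], [1.3, 2.2, 0.0]])  # three chosen atoms (placeholder coordinates)
centroid = tri.mean(axis=0)                        # geometric centre of the triangle
normal = np.cross(tri[1] - tri[0], tri[2] - tri[0])
normal /= np.linalg.norm(normal)                   # unit normal of the plane

def split_block(atoms, tol=1e-6):
    """Return (surface_segment, full_block); 'below' is taken relative to the normal."""
    side = (atoms - tri[0]) @ normal               # signed distance from the plane
    return atoms[side <= tol], atoms

# Hypothetical block of atomic positions centred on the centroid, for illustration only.
block = centroid + np.random.default_rng(0).uniform(-3.0, 3.0, size=(20, 3))
surface_segment, full_block = split_block(block)
print(len(surface_segment), 'of', len(full_block), 'atoms lie in the surface segment')
```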
Then with respect to the plane defined as midway of a particular crystallographic axis, the whole block was considered to be composed of two segments: one segment consisting of the plane and the region below it; other is the whole block with the plane midway ( Fig. 1).Topological properties [38] like critical points on a definite plane were calculated using the XDPROP module of XD2006 software [39]; first with the segment consisting of the plane (surface) and the region below it, and secondly for the whole block (bulk) (Fig. 2). Then, the difference in values of charge density () in e/Å 3 between both the above cases were calculated ( Table 1). The above calculations were conducted both considering atom-atom interactions using the multipole (MP) formalism and avoiding atom-atom interaction using IAM [37]. Final charge density plots were generated using XDGRAPH module of XD2006 software [39] to visualize the surface from a perspective of difference in charge density plots ( Fig. 3-4 and Supplementary Information). Charge density studies for boron carbide [36,40] (4)) forming a triangular pattern reveal difference in charge density values between the two segments, therefore elucidating charge density distribution patterns in a surface containing C(4), X1_B(3) and X5_C(4) atoms (Fig. 3). For the C(4)-B(3) bond, a difference in charge density value of 0.001 e/Å 3 was found at the bond critical point (Table 1) [38]. Whereas for bond between X5_C(4) and B(3) atoms; the difference is 0.033 e/Å 3 at the corresponding bond critical point (Table 1). For other bonds on the studied plane like those between X1_C(4), X1_B(3); X1_C(4), B(3), the differences in charge density values were calculated to be 0.002 e/Å 3 (Table 1) [22]. And, for the plane drawn using two polar and one equatorial boron atoms (X4_B(2)-X5_B(2)-X6_B (2)) [36], the difference in charge density values at critical points has been calculated to be negligible (Table 1 and Supplementary Information). Figure 3(a-d) demonstrates the charge density and deformation density maps of plane drawn using C(4), X1_B(3) and X5_C(4) atoms in the boron carbide system with difference in charge density marked as red contour lines at the surface. conducted on a plane drawn by central B(3) atom of a C(4)-B(3)-C(4) chain and two carbon atoms from a neighbouring chain (C(4)-X1_B(3)-X5_C Analysis on a plane drawn using (X3_B(1), B(1), X2_B(1)) atoms using multipole (MP) formalism for intra-icosahedral interactions of α-boron [35] reveals that between the atoms X2_B(1), X3_B(1) and B(1), there exists a ring critical point [38] (Table 1). Difference calculation between charge density values of two segments has been found to be 0.014 e/Å 3 . In between atoms X3_B(1), B(1); X2_B(1), B(1) and X2_B(1), X3_B(1) there exists a bond critical point [38] with difference charge density of 0.016 e/Å 3 . However for the bond between two asymmetrical boron atoms B(1) and B(2), the difference is 0.004 e/Å 3 (Table 1). For bond between X5_B(2), B(2); the difference is 0.005 e/Å 3 . For atoms X5_B(2), B(2), B(1); X5_B(2), X2_B(1), B(1), there exists ring critical points [22] with difference charge density 0.002 and 0.007 e/Å 3 respectively (Table 1). For inter-icosahedral interactions [35] like between X3_B(2), B(2), bond critical point exists with difference in charge density value 0.006 e/Å 3 (Table 1). However for ring critical points lying between X5_B(2), X3_B(2), B(2); X3_B(2), X4_B(2), X5_B(2), B(2); the difference exists as 0.003 e/Å 3 . 
Analysis on (X4_B(1)-X6-B(1)-X3_B (1)) plane reveals almost no difference in charge density except for the ring critical point formed by X6_B(1), X4_B(1) , X3_B(1) atoms with value 0.003 e/Å 3 ( Table 1). Analyses of (X4_B(1)-X6-B(1)-X3_B(1)) plane revealed that while for the ring critical point between atoms X6_B(1)-X4_B(1)-X3_B(1), there exists a difference in charge density value by 0.003 e/Å 3 , that for the bond X6_B(1)-X4_B(1) is zero (Table 1). Figure 3 (e-h) demonstrates the charge density and deformation density maps of specific planes in the α-boron system drawn using X3_B(1), B(1) and X2_B(1) atoms with difference in charge density marked as red contour lines at the surface. Similar charge density maps for plane drawn using X4_B(1), X6-B(1) and X3_B(1) atoms are demonstrated in Figure 4 (a-d). Similar studies on CaSi were conducted on a plane defined by three atoms [37], namely Ca, X9_Si and X11_Si. For the X9_Si-X11_Si bond, the bond critical point shows a difference in charge density of magnitude 0.005 e/Å 3 (Table 1). While for the Ca-X11_Si bond the difference stands out to be 0.006 e/Å 3 ; for Ca-X9_Si, the difference is 0.008 e/Å 3 and for Si-X2_Si bond, the difference is 0.010e/Å 3 (Table 1). Ring critical point for Ca, X9_Si and X11_Si atoms records a charge density difference of 0.003 e/Å 3 (Table 1). However for the Ca-Si bond, the difference stands out to be zero (upto 3 decimal places). Similar calculations were performed for α-glycine molecule [33] as well. But, for all critical points, the difference in charge density was zero (upto 3 decimal places) (Table 1). However, slight differences between surface and bulk can be seen in the charge density and deformation density maps (Fig. 4). All the above calculations for α-glycine, α-boron, boron carbide and calcium mono-silicide (CaSi) were repeated in IAM mode [38] but in all cases, expectedly no change in charge density values were observed ( Table 2). This indicates that for molecular materials, surface truncation effects are least. However, significant differences might be expected for molecular solids where extended network is formed with the aid of various intermolecular interactions, such as hydrogen bonds. The charge density maps highlighting differences are provided in Fig. 4 (e-h) and the Supplementary Information. The above findings find their interpretation as sources of charge density on a plane when a crystal in continuum encounters a truncation and therefore forms an interface between the bulk crystal and external environment or vacuum that we define as surface [41][42]. As evident from the values of difference in charge density on a plane, the surface in particular has its own charge density picture that involves variable charge densities at different regions of the surface. Since like in bulk, the surface consists of features like bonds, rings, etc. [38], the difference in charge density at the critical points in all these topological features actually contribute in defining the properties of that particular surface [30,38]. Like for the (C(4)-X1_B(3)-X5_C(4)) plane in boron carbide, charge densities calculated at bond critical points for C(4)-B(3) bond and X5_C(4)-B(3) bond for contribution from atoms below the plane is smaller than what is calculated for contribution from atoms both above and below the plane (Table 1, Fig. 3). The value is 1.553 e/Å 3 for C(4)-B(3) bond for atoms below and it is 1.555e/Å 3 for atoms both above and below the plane. 
For X5_C(4)-B(3) bond, the values are 1.522 e/Å 3 and 1.555 e/Å 3 respectively. This suggests that the surface defined by above plane bears a deficiency of electrons (Table 1, Fig. 3). In order to check the practical applicability of surface charge density using our method, we have performed chemiresistive gas sensing with boron carbide at high temperatures (see Supplementary Information). The sample showed a steady p-type response to ammonia (Fig. 5). Hence when a reducing gas like ammonia interacts with the chemisorbed oxygen layer on boron carbide surface, there is a transfer of electrons back into the boron carbide system [14,43]. As can be seen from figure 3, the surface containing (C(4)-X1_B(3)-X5-C(4)) plane is substantially more electron deficient than the bulk. Thus the ammonia molecules shall have greater chance of donating electrons to the system via mentioned surface. This leads to a rise in electrical resistance of the typical p-type boron carbide system, making the material sense reducing gas like ammonia (Fig. 5) [14,43]. Moreover it is evident that while some surfaces under consideration posses these differences in charge densities, several others don't posses that ( Table 1, Fig. 4 and Supplementary Information). This can be anticipated to be one of the reasons why certain surfaces in materials are found to be experimentally active for certain external phenomenon while some are not (Fig. 5) [44]. This also bears the cause behind differential behavior of different material planes under external factors. Observance of these differences on considering the multipole atom model categorically substantiates the necessity of considering atom-atom interactions in order to define an entity on some surface (Table 1 and 2). The above discussion explains why surface behaves differently than the bulk, as evident from numerous surface dependent phenomena like gas sensing, catalysis, etc. [1][2][3][4][5][6][7][8][9][10]. Since the contribution of surrounding atoms depend on the crystal structures, chemical interaction/bonding and atomic planes, it is obvious that the difference between bulk and surface will be different depending on which plane has been truncated ( Table 1). For compounds with heavy element, modelling the outer core electrons besides valence electrons will provide more accurate picture of charge density distribution [45]. This mechanism of elucidating surface structure from bulk crystal structure will not only open the possibility to determine surface structure from bulk crystal structure but also enable researchers to study surface using the popular technique of XRD. This technique avoids involvement of sophisticated experimental tools specialized for surface analyses and ensures charge density estimation accurately upto two decimal places in e/Å 3 , yet unachieved by methods adopted till date. The generality of this method ensures that this method will be applicable to theoretically calculated charge densities or invariom model densities [46]. As the surface in this work has been defined as ideally truncated plane with respect to vacuum (Fig. 1), the surface charge density picture may vary based on post truncation internal and external interactions. Including charge density contributions of the atoms of surface layers towards the atoms that were above those layers before truncation from the bulk, back to the surface layer atoms by source function calculation and using other complementary methods could yield more accurate picture of surface structure [25][26]. 
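Returning to the chemiresistive measurement: the text reports the raw rise in resistance of the p-type boron carbide film on exposure to ammonia. If a single response figure is wanted, a relative change of resistance is one common convention; the formula below is an assumption made for illustration (it is not specified in the text) and the resistance values are hypothetical.

```python
# One common way to quantify a p-type chemiresistive response to a reducing gas
# such as NH3 (an assumed convention, not taken from the text).
def p_type_response_percent(R_air, R_gas):
    # For a p-type material and a reducing gas, R_gas > R_air, so the response is positive.
    return 100.0 * (R_gas - R_air) / R_air

# Hypothetical baseline and gas-exposure resistances (ohms), for illustration only.
print(p_type_response_percent(R_air=1.0e5, R_gas=1.4e5))  # -> 40.0
```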
Further developments using dynamic charge density [33,47] and source function calculations [26] for extended solids shall provide more information and enable evolution of greater insights into surface structure determination from bulk X-ray diffraction data. Conclusion In this work, using the concept of aspherical charge density models of bulk crystal structures in different molecular and extended solids, we have shown a convenient way of determining highresolution surface structures both in qualitative and quantitative manner from bulk X-ray diffraction data. Differences in charge density at different critical points on the planes in different molecular and extended solids were calculated, based on presence and absence of atoms above the plane. While for molecular solids like α-glycine no such detectable difference was observed, for extended solids like α-boron, boron carbide and CaSi, significant differences in charge density distributions between bulk and surface were found. This was observed only when atom-atom interactions were considered while calculating charge densities at different parts of the plane, highlighting importance of considering inter-atomic interactions for defining a surface. While for some planes this difference is significant, it is negligible for others. This indicates a plausible reason for different behavior of one surface in a material than the other. The magnitude and nature of charge densities calculated are expected to delineate the surface structure which can be anticipated to explain significant surface phenomena. Tables and figures Gas sensing experiments For developing the gas sensor, 0.1 g of boron carbide powder was thoroughly mixed in isopropyl alcohol into a consistent slurry and drop-casted on a flat alumina substrate with interdigitated electrodes made using screen printing technique. The coating was allowed to dry at 60°C for 6 hours and then connections were made using platinum wires. The whole sensor along with the wires was placed in a tube furnace and the sensor resistance was measured by a 7 1 2 ⁄ digit multimeter (Keysight 34470A) interfaced with the Keysight GUI software. For sensing purposes, standard calibrated cylinders of ammonia in various ppm concentrations were used. They were connected using a Mass Flow Controller (MFC) with gas flow regulated at 50 sccm/min. Sensing experiments were conducted at different temperatures with NH 3 gas flow by operating the tube furnace. Optimum sensing response was recorded at 600°C for 10 ppm ammonia gas in air medium. α-glycine (C(1)-C(2)-O(2)) bonds/rings ρ base (e/Å 3 ) ρ base+top (e/Å 3 ) Δρ(e/Å 3 ) boron carbide(C(4)-X1_B(3)-X5-C(4)) Fig. 1 : 1Scheme for drawing planes with atoms above and below the plane (a) boron carbide [(C(4)-X1_B(3)-X5-C(4)) corresponding to (0 1 0) plane] (b) α boron [(X3_B(1)-B(1)-X2_B(1)) corresponding to (1 0 0) plane] (c) CaSi (X9_Si(1)-Ca(1)-X11_Si(1)) corresponding to (-1 2 0) plane] and (d) α-glycine [(C(1)-C(2)-O(2)) corresponding to (1 -1 3) plane]. Fig. 2 : 2Schematic diagram delineating methodology of defining surface. (a) A cut introduced through the bulk. (b) Removal of the region above the cut. (c) The plane (0 1 0) exposed is now defined as surface. Fig. 3 : 3Comparison of (a-b, e-f) charge density (contours at 0.1upto2.5 electron Å -3 ) and (c-d, gh) deformation density maps (contours at 0.05 upto 1.0 electron Å -3 ) in boron carbide (C(4)-X1_B(3)-X5-C(4)) [(01 0) plane] and α-boron (X3_B(1)-B(1)-X2_B(1)) [(0 0 -1) plane] respectively. 
Tables and figures

Fig. 1: Scheme for drawing planes with atoms above and below the plane: (a) boron carbide [(C(4)-X1_B(3)-X5-C(4)), corresponding to the (0 1 0) plane], (b) α-boron [(X3_B(1)-B(1)-X2_B(1)), corresponding to the (1 0 0) plane], (c) CaSi [(X9_Si(1)-Ca(1)-X11_Si(1)), corresponding to the (-1 2 0) plane] and (d) α-glycine [(C(1)-C(2)-O(2)), corresponding to the (1 -1 3) plane].

Fig. 2: Schematic diagram delineating the methodology of defining a surface. (a) A cut introduced through the bulk. (b) Removal of the region above the cut. (c) The exposed (0 1 0) plane is now defined as the surface.

Fig. 3: Comparison of (a-b, e-f) charge density maps (contours at 0.1 up to 2.5 e/Å³) and (c-d, g-h) deformation density maps (contours at 0.05 up to 1.0 e/Å³) in boron carbide (C(4)-X1_B(3)-X5-C(4)) [(0 1 0) plane] and α-boron (X3_B(1)-B(1)-X2_B(1)) [(0 0 -1) plane], respectively. (a, c) Plane containing B(1), B(3) and C(4) atoms representing the surface; (b, d) bulk. Plane containing B(1) atoms representing (e, g) the surface and (f, h) the bulk. Contour lines on the surface that differ from the bulk are shown separately in red. Solid contour lines denote positive values, dashed lines negative values and dotted lines the zero contour.

Fig. 4: Comparison of (a-b, e-f) charge density maps (contours at 0.1 up to 2.5 e/Å³) and (c-d, g-h) deformation density maps (contours at 0.05 up to 1.0 e/Å³) in α-boron (X4_B(1)-X6-B(1)-X3_B(1)) [(1 0 0) plane] and α-glycine (C(1)-C(2)-O(2)) [(1 -1 3) plane], respectively. (a, c) Plane containing B(1), B(2) atoms representing the surface; (b, d) bulk. Plane containing C(1), C(2), O(2) atoms representing (e, g) the surface and (f, h) the bulk. Contour lines on the surface that differ from the bulk are shown separately in red. Solid contour lines denote positive values, dashed lines negative values and dotted lines the zero contour.

Fig. 5: Dynamic sensing characteristics of the boron carbide sensor towards 10 ppm NH3 at an elevated temperature of 600°C.

Fig. S1: (a) Powder XRD pattern, (b-d) surface morphology and (e) surface elemental composition by energy dispersive X-ray (EDX) spectroscopy of the boron carbide powder used for gas sensing.

Fig. S2: Crystal structures of (a) boron carbide, (b) α-boron, (c) calcium monosilicide and (d) α-glycine, identifying the different atoms in each crystal.

Fig. S3: Comparison of (a-b, e-f) electron density maps (contours at 0.1 up to 2.5 e/Å³) and (c-d, g-h) deformation density maps (contours at 0.05 up to 1.0 e/Å³) in boron carbide (X4_B(2)-X5_B(2)-X6_B(2)) [(-1 1 4) plane] and CaSi (X9_Si(1)-Ca(1)-X11_Si(1)), comparing surface and bulk. Contour lines on the surface that differ from the bulk are shown separately in red. Solid contour lines denote positive values, dashed lines negative values and dotted lines the zero contour.

Fig. S4: Comparison of (a-b) Laplacian of the charge density maps [contours at ±(2, 4, 8)×10^n e/Å⁵ with −3 ≤ n ≤ 3] in boron carbide (C(4)-X1_B(3)-X5-C(4)). (a) Plane containing B(1), B(3) and C(4) atoms representing the surface; (b) bulk. Contour lines on the surface that differ from the bulk are shown separately in red. Solid contour lines denote positive values, dashed lines negative values.

Fig. S5: High-temperature ammonia sensing set-up for studying the surface properties of boron carbide in the presence of ammonia gas.

Fig. S6: (a) Bright-field transmission electron microscopy (TEM) image, (b) selected area electron diffraction (SAED) pattern and (c, d) high-resolution TEM images of the boron carbide powder used for the sensing application.

Table 1: Comparison of charge density values at different bond critical points for α-glycine, α-boron, boron carbide and CaSi using the multipole (MP) formalism. [Recovered column headers: bonds/rings, ρ_base (e/Å³), ρ_base+top (e/Å³), Δρ (e/Å³); rows include α-glycine (C(1)-C(2)-O(2)) and boron carbide (C(4)-X1_B(3)-X5-C(4)); numerical entries not reproduced here.]

Table 2: Comparison of charge density values at different bond critical points for α-glycine, α-boron, boron carbide and CaSi using the independent atom model (IAM).

Supplementary Information: High-resolution surface structure determination from bulk X-ray diffraction data

References
[1] Boukhvalov, D. W., Paolucci, V., D'Olimpio, G., Cantalini, C. & Politano, A. Chemical reactions on surfaces for applications in catalysis, gas sensing, adsorption-assisted desalination and Li-ion batteries: opportunities and challenges for surface science. Phys. Chem. Chem. Phys. 23, 7541-7552 (2021).
[2] Chae, H., Siberio-Pérez, D., Kim, J., Go, Y., Eddaoudi, M., Matzger, A. J., O'Keeffe, M., Yaghi, O. M. & Materials Design and Discovery Group. A route to high surface area, porosity and inclusion of large molecules in crystals. Nature 427, 523-527 (2004).
[3] Oosthuizen, D. N., Motaung, D. E. & Swart, H. C. Gas sensors based on CeO2 nanoparticles prepared by chemical precipitation method and their temperature-dependent selectivity towards H2S and NO2 gases. Applied Surface Science 505, 144356 (2020).
[4] Chakraborty, N. & Mondal, S. Dopant Mediated Surface Charge Imbalance in Enhancing Performance of Metal Oxide Chemiresistive Gas Sensors. J. Mater. Chem. C 10, 1968-1976 (2022).
[5] Batzill, M. & Diebold, U. Surface studies of gas sensing metal oxides. Phys. Chem. Chem. Phys. 9, 2307-2318 (2007).
[6] Sauer, T., Cesconeto Neto, G., José, H. J. & Moreira, R. F. P. M. Kinetics of photocatalytic degradation of reactive dyes in a TiO2 slurry reactor. Journal of Photochemistry and Photobiology A: Chemistry 149, 147-154 (2002).
[7] Song, I., Lee, H., Jeon, S. W., Ibrahim, I. A. M., Kim, J., Byun, Y., Koh, D. J., Han, J. W. & Kim, D. H. Simple physical mixing of zeolite prevents sulfur deactivation of vanadia catalysts for NOx removal. Nat. Commun. 12, 901 (2021).
[8] Daniel, M. C. & Astruc, D. Gold Nanoparticles: Assembly, Supramolecular Chemistry, Quantum-Size-Related Properties, and Applications toward Biology, Catalysis, and Nanotechnology. Chem. Rev. 104, 293-346 (2004).
[9] Leblebici, S. Y., Leppert, L., Li, Y., Reyes-Lillo, S. E., Wickenburg, S., Wong, E., Lee, J., Melli, M., Ziegler, D., Angell, D. K., Frank Ogletree, D., Ashby, P. D., Toma, F. M., Neaton, J. B., Sharp, I. D. & Weber-Bargioni, A. Facet-dependent photovoltaic efficiency variations in single grains of hybrid halide perovskite. Nat. Energy 1, 16093 (2016).
[10] Lei, W., Portehault, D., Liu, D., Qin, S. & Chen, Y. Porous boron nitride nanosheets for effective water cleaning. Nat. Commun. 4, 1777 (2013).
[11] Lüth, H. Surface and Interface Physics: Its Definition and Importance. In: Solid Surfaces, Interfaces and Thin Films. Advanced Texts in Physics. Springer, Berlin, Heidelberg (2001).
[12] Brune, D., Hellborg, R., Whitlow, H. J. & Hunderi, O. (Eds.) Surface Characterization: A User's Sourcebook. WILEY-VCH Verlag.
[13] Woodruff, D. Methods of Surface Structure Determination. In: Modern Techniques of Surface Science, pp. 98-214. Cambridge University Press, Cambridge (2016). doi:10.1017/CBO9781139149716.006
[14] Chakraborty, N., Sanyal, A., Das, S., Saha, D., Medda, S. K. & Mondal, S. Ammonia Sensing by Sn1−xVxO2 Mesoporous Nanoparticles. ACS Appl. Nano Mater. 3, 7572-7579 (2020).
[15] Feidenhans'l, R. Surface structure determination by X-ray diffraction. Surface Science Reports 10, 105-188 (1989).
[16] Harris, K. D. M. & Tremayne, M. Crystal Structure Determination from Powder Diffraction Data. Chem. Mater. 8, 2554-2570 (1996).
[17] Harris, K. D. M., Tremayne, M. & Kariuki, B. M. Contemporary Advances in the Use of Powder X-Ray Diffraction for Structure Determination. Angew. Chem. Int. Ed. 40, 1626-1651 (2001).
[18] Warren, B. E. X-Ray Diffraction. Addison-Wesley, Reading, MA (1969).
[19] Gustafson, J., Shipilin, M., Zhang, C., Stierle, A., Hejral, U., Ruett, U., Gutowski, O., Carlsson, P. A., Skoglundh, M. & Lundgren, E. High-Energy Surface X-ray Diffraction for Fast Surface Structure Determination. Science 343, 758-761 (2014).
[20] Rous, P. J., Pendry, J. B. & Saldin, D. K. Tensor LEED: A Technique for High-Speed Surface-Structure Determination. Phys. Rev. Lett. 57, 2951-2954 (1986).
[21] Heinz, K., Saldin, D. K. & Pendry, J. B. Diffuse LEED and Surface Crystallography. Phys. Rev. Lett. 55, 2312-2315 (1985).
[22] Gao, W., Addiego, C., Wang, H., Yan, X., Hou, Y., Ji, D., Heikes, C., Zhang, Y., Li, L., Huyan, H., Blum, T., Aoki, T., Nie, Y., Schlom, D. G., Wu, R. & Pan, X. Real-space charge-density imaging with sub-ångström resolution by four-dimensional electron microscopy. Nature 575, 480-484 (2019).
[23] Renaud, G., Lazzari, R. & Leroy, F. Probing surface and interface morphology with Grazing Incidence Small Angle X-Ray Scattering. Surface Science Reports 64, 255-380 (2009).
[24] Rauscher, M., Paniago, R., Metzger, H., Kovats, Z., Domke, J., Peisl, J., Pfannes, H. D., Schulze, J. & Eisele, I. Grazing incidence small angle x-ray scattering from free-standing nanostructures. J. Appl. Phys. 86, 6763-6769 (1999).
[25] Bader, R. F. W. & Gatti, C. A Green's function for the density. Chemical Physics Letters 287, 233-238 (1998).
[26] Gatti, C., Saleh, G. & Presti, L. L. Source Function applied to experimental densities reveals subtle electron-delocalization effects and appraises their transferability properties in crystals. Acta Cryst. B 72, 180-193 (2016).
[27] Hu, B., McCandless, G. T., Menard, M., Nascimento, V. B., Chan, J. Y., Plummer, E. W. & Jin, R. Surface and bulk structural properties of single-crystalline Sr3Ru2O7. Phys. Rev. B 81, 184104 (2010).
[28] Cullity, B. D. & Stock, S. R. Elements of X-ray Diffraction. Addison-Wesley Publishing Company, Inc. (1956).
[29] Dittrich, B., Weber, M., Kalinowski, R., Grabowsky, S., Hubschle, C. B. & Luger, P. How to easily replace the independent atom model - the example of bergenin, a potential anti-HIV agent of traditional Asian medicine. Acta Cryst. B 65, 749-756 (2009).
[30] Coppens, P. X-ray Charge Densities and Chemical Bonding. Oxford University Press, New York (1997).
[31] Dittrich, B., Hubschle, C. B., Messerschmidt, M., Kalinowski, R., Girnt, D. & Luger, P. The invariom model and its application: refinement of D,L-serine at different temperatures and resolution. Acta Cryst. A 61, 314-320 (2005).
[32] Stewart, R. F. & Spackman, M. A. VALRAY Users Manual. Department of Chemistry, Carnegie-Mellon University, Pittsburgh, USA (1983).
[33] Mondal, S., Prathapa, S. J. & van Smaalen, S. Experimental dynamic electron densities of multipole models at different temperatures. Acta Cryst. A 68, 568-581 (2012).
[34] Destro, R., Roversi, P., Barzaghi, M. & Marsh, R. E. Experimental Charge Density of α-Glycine at 23 K. J. Phys. Chem. A 104, 1047-1054 (2000).
[35] Mondal, S., van Smaalen, S., Parakhonskiy, G., Jagannatha Prathapa, S., Noohinejad, L., Bykova, E. & Dubrovinskaia, N. Experimental evidence of orbital order in α-B12 and γ-B28 polymorphs of elemental boron. Phys. Rev. B 88, 024118 (2013).
[36] Mondal, S., Bykova, E., Dey, S., Imran Ali, S., Dubrovinskaia, N., Dubrovinsky, L., Parakhonskiy, G. & van Smaalen, S. Disorder and defects are not intrinsic to boron carbide. Sci. Rep. 6, 19330 (2016).
[37] Kurylyshyn, I. M., Fassler, T. F., Fischer, A., Hauf, C., Eickerling, G., Presnitz, M. & Scherer, W. Probing the Zintl-Klemm Concept: A Combined Experimental and Theoretical Charge Density Study of the Zintl Phase CaSi. Angew. Chem. Int. Ed. 53, 3029-3032 (2014).
[38] Bader, R. F. W. Atoms in Molecules - a Quantum Theory. Oxford University Press, New York (1990).
[39] Volkov, A., Macchi, P., Farrugia, L. J., Gatti, C., Mallinson, P., Richter, T. & Koritsanszky, T. XD2006, A Computer Program Package for Multipole Refinement, Topological Analysis of Charge Densities and Evaluation of Intermolecular Energies from Experimental or Theoretical Structure Factors (2006). URL http://xd.chem.buffalo.edu/
[40] Mondal, S. Charge Transfer and Fractional Bonds in Stoichiometric Boron Carbide. Chem. Mater. 29, 6191-6194 (2017).
[41] Tran, R., Xu, Z., Radhakrishnan, B., Winston, D., Sun, W., Persson, K. A. & Ong, S. P. Data Descriptor: Surface energies of elemental crystals. Sci. Data 3, 160080 (2016).
[42] Bare, S. R. & Somorjai, G. A. Surface Chemistry. In: Meyers, R. A. (Ed.), Encyclopaedia of Physical Science and Technology (Third Edition), Academic Press, 373-421 (2003).
[43] Ding, Y., Guo, X., Du, B., Hu, X., Yang, X., He, Y., Zhou, Y. & Zang, Z. Low-operating temperature ammonia sensor based on Cu2O nanoparticles decorated with p-type MoS2 nanosheets. J. Mater. Chem. C 9, 4838 (2021).
[44] Kaneti, Y. V., Zhang, Z., Yue, J., Zakaria, Q. M. D., Chen, C., Jiang, X. & Yu, A. Crystal plane-dependent gas-sensing properties of zinc oxide nanostructures: experimental and theoretical studies. Phys. Chem. Chem. Phys. 16, 11471 (2014).
[45] Mondal, S. Experimental Charge Density Studies of Inorganic Solids. In: Understanding Intermolecular Interactions in the Solid State: Approaches and Techniques. Monographs in Supramolecular Chemistry. Royal Society of Chemistry, 130-158 (2018).
[46] Dittrich, B., Koritsanszky, T. & Luger, P. A Simple Approach to Nonspherical Electron Densities by Using Invarioms. Angew. Chem. Int. Ed. 43, 2718-2721 (2004).
[47] Prathapa, S. J., Mondal, S. & van Smaalen, S. Electron densities by the maximum entropy method (MEM) for various types of prior densities: a case study on three amino acids and a tripeptide. Acta Cryst. B 69, 203-213 (2013).

α-glycine bonds listed in Tables 1 and 2: C(1)-C(2), C(1)-O(1), C(1)-O(2), C(2)-N.

Symmetry-related atomic coordinates for B sites in α-boron: X2 (-y, x-y, z); X3 (-x+y, -x, z); X4 (y, x, -z); X5 (x-y, -y, -z); X6 (-x, -x+y, -z).

Symmetry-related atomic coordinates for B sites in boron carbide: X1 (x, y, z); X4 (y, x, -z); X5 (x-y, -y, -z); X6 (-x, -x+y, -z).

Symmetry-related atomic coordinates for Si sites in CaSi: X2 (-x, -y, 1/2+z); X9 (1/2+x, 1/2+y, z).
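The symmetry codes above are ordinary crystallographic coordinate transformations. The minimal Python sketch below shows how the X-labelled positions for the α-boron B sites are generated from a reference site; the starting fractional coordinate is illustrative only and is not taken from the refined structures.

# Minimal sketch: generate symmetry-equivalent fractional coordinates from the
# symmetry codes listed above for the B sites of alpha-boron.
alpha_boron_ops = {
    "X2": lambda x, y, z: (-y, x - y, z),
    "X3": lambda x, y, z: (-x + y, -x, z),
    "X4": lambda x, y, z: (y, x, -z),
    "X5": lambda x, y, z: (x - y, -y, -z),
    "X6": lambda x, y, z: (-x, -x + y, -z),
}

def wrap(frac):
    """Map a fractional coordinate back into the unit cell [0, 1)."""
    return tuple(c % 1.0 for c in frac)

b1 = (0.12, 0.34, 0.56)   # illustrative B(1) fractional coordinate (assumed)

for label, op in alpha_boron_ops.items():
    print(label, wrap(op(*b1)))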